The impact of $S$-wave thresholds $D_{s1}\bar{D}_{s}+c.c.$ and $D_{s0}\bar{D}^*_{s}+c.c.$ on the vector charmonium spectrum

By investigating the two nearly degenerate $D_{s1}\bar{D}_{s}+c.c.$ and $D_{s0}\bar{D}^*_{s}+c.c.$ thresholds at about 4.43 GeV, we propose that the $\psi(4415)$ and $\psi(4160)$ can be mixing states between the dynamically generated states of the strong $S$-wave $D_{s1}\bar{D}_{s}+c.c.$ and $D_{s0}\bar{D}^*_{s}+c.c.$ interactions and the quark model states $\psi(4S)$ and $\psi(2D)$. We investigate the $J/\psi K\bar{K}$ final states and the invariant mass spectrum of $J/\psi K$ to demonstrate that nontrivial lineshapes can arise from such a mechanism. This process, which proceeds through triangle loop transitions, lies in the vicinity of the so-called "triangle singularity (TS)" kinematics. As a result, it provides a special mechanism for the production of the exotic states $Z_{cs}$, the strange partners of the $Z_c(3900)$, with flavor contents of $c\bar{c}q\bar{s}$ (or $c\bar{c}s\bar{q}$), where $q$ denotes the $u/d$ quarks. The lineshapes of the $e^+e^-\to J/\psi K\bar{K}$ cross sections and of the $J/\psi K \ (J/\psi \bar{K})$ spectrum are sensitive to the dynamically generated state, and we demonstrate that a pole structure can easily be distinguished from open-threshold CUSP effects if an exotic state is created. A precise measurement of the cross section lineshapes can test such a mixing mechanism and provide novel information on the exotic partners of the $Z_c(3900)$ in the charmonium spectrum.

I. INTRODUCTION

During the past decade, the observation of a large number of exotic hadron candidates has initiated tremendous activity and effort toward understanding their dynamical nature in both experiment and theory. Most of these heavy-flavored states, tentatively labeled "XYZ" states, are intimately related to nearby S-wave thresholds. This seems to provide important clues for understanding their intrinsic structures. Typical examples include the X(3872) and Z c (3900) [1], which are close to the DD * + c.c. threshold, and the Z c (4020) [2], which is close to the D * D * threshold. Their counterparts in the bottom sector are the Z b (10610) and Z b (10650) [3], which are located at the BB * + c.c. and B * B * thresholds, respectively. In the vector charmonium spectrum, the mysterious Y (4260) seems to be closely related to the S-wave D 1 (2420)D + c.c. threshold, which helps account for many new experimental observations of its exclusive decays. Recent studies indicate strong evidence for a D 1 (2420)D + c.c. hadronic molecule component in its wavefunction, while a compact core should also be present as a consequence of heavy quark spin symmetry (HQSS) breaking effects [4][5][6][7]. Following these interesting discoveries, many theoretical interpretations have been proposed in the literature. Several recent review articles have given detailed discussions of the experimental status and theoretical models for these exotic candidates (see e.g. Ref. [8] for a review of hadronic molecules, Ref. [9] for a review of open charm/bottom systems, Ref. [10] for a review of newly discovered states and the comparison with theoretical expectations, and Ref. [11] for a review of the progress made in the heavy-quark exotics field). For the heavy quarkonium exotic meson candidates, typical scenarios include hadrocharmonium [12], tetraquarks [13,14], loosely bound molecules [15], or hybrids [16].
Different kinds of kinematic effects related to these states were also discussed in the literature, such as CUSP effects [17][18][19][20], and triangle singularities [4,[21][22][23][24][25]. In Ref. [26], it was demonstrated that though CUSP effects can result in some structures, it is still not possible to produce pronounced, narrow near-threshold peaks without introducing physical poles. In contrast, the triangle singularity mechanism makes it possible to enhance threshold structures on top of a pole. Special features arising from such a mechanism have attracted a lot of attention in the understanding of many threshold phenomena [4,[27][28][29][30][31]. In Refs. [32,33], a practical parametrization for the line shapes of the near-threshold states is proposed. Based on the Lippmann-Schwinger equations for the coupled channel problem, this approach incorporates the inelastic channels additively with the unitarity and analyticity constraints for the t matrix. In this work, we investigate the very closely lied D s1Ds + c.c. and D s0D * s + c.c. thresholds which are located between two nearby charmonia states in the quark model. For the convenience we note these two thresholds by D s1Ds and D s0D * s as follows in this work. We study the mixing mechanism between two nearby quark model states through these two thresholds and the dynamically generated states which can be possibly related to ψ(4415) and ψ(4160) [46]. Similar to the production process of Z c (3900) in e + e − → Y (4260) → J/ψππ where the S-wave threshold D 1 (2420)D + c.c. plays a crucial role for understanding the properties of Y (4260) and Z c (3900), the process e + e − → J/ψKK around the mass region of the thresholds of D s1Ds and D s0D * s may provide important clues for understanding the nearby ψ(4415). In 2007, Belle Collaboration investigated the J/ψK + K − final states in e + e − annihilations via the initial-state radiation (ISR) from threshold to the center of mass (c.m.) energy of 6.0 GeV [34]. The measured cross sections seemed to be improved with the inclusion of a coherent ψ(4415). However, the limited statistics did not allow a conclusion on the detailed properties of ψ(4415). With the possible correlations with the S-wave D s1Ds and D s0D * s thresholds, the J/ψKK decay channel may shed a light on the structure of ψ(4415). Interesting issues that can also be investigated in e + e − → J/ψKK are the role played by the triangle singularity (TS) mechanism, and possible production of exotic states which can couple to D sD * + c.c. and D * sD + c.c. and contain at least four quarks in their wavefunctions. This is a mechanism similar to the production of Z c (3900) as proposed in Ref. [4]. In Ref. [25], the TS mechanism corresponding to similar charmed-strange meson thresholds but with final states of J/ψ and a hidden ss is also investigated. To be more specific, given that the initial vector states can first couple to D s1Ds or D s0D * s , the intermediate D s1 or D s0 can then rescatter againstD s orD * s by exchanging D * or D, respectively, before converting into a Kaon, and then the interactions between the exchanged D * (or D) andD s (orD * s ) will form J/ψ and an anti-Kaon. Such a transition is via a triangle diagram, and for specific kinematics all these three internal particles may approach their on-shell conditions simultaneously. Such a kinematic condition is called the TS condition and it brings the leading singular amplitude to the loops. 
Actually, around the mass region of ψ(4415), the kinematics are close to the TS condition and special phenomena are expected to show up that can be explored in experiment. Moreover, in case that exotic states can be formed by the S-wave interaction between D sD * + c.c. (and/or D * sD + c.c.) meson pairs, nontrivial linehsape in the invariant mass spectrum of J/ψK (J/ψK) is also expected. As follows, we first present the formalism for the dynamically generated states due to the strong S-wave couplings to D s1Ds and D s0D * s in Section II. We then analyze the kinematics of the triangle loops in e + e − → J/ψKK and present the calculation results in Section III with discussions. A brief summary will be given in the last Section. II. DYNAMICALLY GENERATED STATES The mass thresholds for both D s1Ds and D s0D * s lie at about 4.43 GeV (which are 4.428 GeV and 4.429 GeV, respectively), implying the nearly equal spin splitting of mass in the (1/2) + and (1/2) − doublets which also happens in the beauty-strange excited meson pairs [35]. Two charmonia, ψ(4S) and ψ(2D), in the potential quark model with the masses close to these two thresholds can couple to them via an S-wave interaction. Given sufficiently strong couplings, it may dynamically generate pole states near these thresholds and result in mixings between the quark model states and the dynamically generated states through the intermediate D s1Ds and D s0D * s bubbles as shown in Fig 1. To investigate such a possible scenario, we construct the propagators of ψ(4S) and ψ(2D) in a coupled-channel approach [36] as the following: where D 1 and D 2 are the denominators of the single propagator of ψ(4S) and ψ(2D), respectively, and D 12 is the mixing term between them through the D s1Ds and D s0D * s bubble diagrams. So here we have where B is the sum of the two amplitudes of the bubble diagrams of D s1Ds and D s0D * s between two states. Since ψ(4S) and ψ(2D) both couple to D s1Ds and D s0D * s in an S-wave, we have where g 1 (g 2 ) is the bare coupling for ψ(4S) (ψ(2D)) to these two thresholds, D s1Ds or D s0D * s ; I 20 (P, m a , m b ) is the two-point loop integral with the initial energy P and intermediate particle masses m a and m b . To remove the divergent part of this integral, we adopted an exponential momentum-dependent form factor exp(−2 l 2 /Λ 2 ) where l is the momentum of the particles in the loop. More details about the integral can be found in Appendix A. The corresponding physical states |A and |B can be expressed as mixtures of the quark model states |a and |b with a mixing matrix, i.e. With the mixing matrix R(θ, φ) the physical propagator matrix G 12p can be related to G 12 by The physical propagator matrix G 12p should be a diagonal matrix. So we can search for the physical poles in the propagator matrix G by requiring det[G 12 ] = 0. The coupling constants g 1 and g 2 are unknown parameters in matrix G 12 which can be determined by requiring the physical poles located at the masses of the observed states. At this moment, we treat g 1 = g 2 ≡ g for simplicity. It has been studied in the literature that the D s1 (2460) is a mixed state of 1 P 1 and 3 P 1 with compatible strength. This allows that the D s1Ds pair can couple to the S and D-wave charmonia with similar coupling strengths in the S wave. This argument, in principle, does not apply to the ψ(2D) coupling to D s0D * which will be suppressed by the HQSS. However, as investigated broadly in the charmonium mass region, the HQSS is rather apparently broken [6]. 
Therefore, it is still reasonable to treat the coupling in an S wave as a leading approximation. To be more specific with our study here, we have assumed that the the couplings of the quark-model charmonium states, i.e. ψ(4S) and ψ(2D), to the nearby S-wave thresholds are strong enough for dynamically generating the FIG. 3: The sum of bubbles and bare propagators physical states ψ(4160) and ψ(4415). Whether this is a reasonable assumption can be examined by two aspects. The first one is the cross section lineshape in the vicinity of these two thresholds. Given the strong S-wave interactions with the nearby quark model states, the propagators cannot be described by a simple Breit-Wigner form. Thus, the cross section lineshape will appear to be nontrivial. The second aspect is the decay modes of such dynamically generated states. They will favor decay channels correlated with the threshold interactions. In this case, the reaction channel of e + e − → J/ψKK will be extremely interesting. Since we still lack experimental data for e + e − → J/ψKK in the vicinity between ψ(4160) and ψ(4415), the following strategy is adopted for investigating the underlying dynamics. By examining the movement of the pole positions of the physical states in terms of coupling g from 1 to 10 GeV −1/2 which is the typical coupling range for the S-wave coupling of charmonium-like states to heavy-light D mesons, we identify poles which can match the nearby charmonium states ψ(4160) and ψ(4415) with a reasonable coupling strength for g. We find that with g = 7 GeV where i/N s = i/N 1 + i/N 2 is the sum of the two bare propagators of ψ(4S) and ψ(2D) and G(E) is the sum of the two kinds of bubbles of D s1Ds and D s0D * s with bare couplings. So we have where Σ 1 = iG(E)(N 1 + N 2 )/(2N 2 m 1 ). E is the initial energy of ψ(4415) and m 1 is the bare mass of ψ(4S). By expanding the denominator of the propagator near the physical mass [7] we have = iZ where With this physical propagator of ψ(4415) and its strong coupling to the D s1Ds and D s0D * s thresholds we consider the e + e − → J/ψKK through triangle loops showed in Fig. 4(a) and (b). In these diagrams, the D s1 decays into D * K and D s0 decays into DK both in a relative S wave. Also, the scatterings ofD s D * andD * s D to J/ψK are also via an S wave. As pointed out earlier, the triangle transition is located in the vicinity of the kinematic condition for the TS. Therefore, it is necessary to investigate the kinematic effects arising from the TS mechanism and identify the dynamically generated states via the S-wave interactions with the nearby open thresholds. To proceed, we first give the corresponding Lagrangians for the S-wave coupling: where S and H represent the positive and negative parity charmed mesons, respectively; while Ψ is the field of the vector charmonium states and A is the chiral field. The explicit form of each field can be found in Appendix B. As emphasized before, we require that these couplings are within the natural scale and more stringent constraints can be imposed by future experimental measurements. Typical triangle diagrams for charmonium decays into J/ψKK are plotted in Fig. 4. For Fig. 4 (a), the intermediate D s1 plays a key role since it has a strong S-wave coupling to D * K. As broadly studied in the literature (see e.g. Ref. [8] and references therein for a recent review of hadronic molecules), the D s1 has been an ideal candidate for a D * K molecule. 
Although the mass of D s1 is slightly lower than the mass threshold of D * K which is 2.55 GeV, it has approached the TS kinematics closely with the initial energy also approaching the D s1Ds threshold. The presence of the TS also indicates that the dominant contributions from the triangle loop come from the kinematic region where all the internal states are approaching their on-shell condition simultaneously. One feature arising from the specific process under discussion is that the physical kinematic region for the TS is quite limited. As analyzed in Ref. [24], the physical region for the TS is related to the phase space of the intermediate state two-body decay, i.e. D s1 → D * K. Since the mass of D s1 is slightly lower than the D * K threshold, the contributions from the TS mechanism will be limited to a rather narrow kinematic region. But still, abnormal lineshape can be expected. Similar phenomenon happens with the D s0D * s (D) loop of Fig. 4 (b). Also, it should be mentioned that D s0 (2317) is an ideal candidate for the DK molecule (see Ref. [8] for a detailed review). Because of the lack of phase space for D s0 → DK, the TS kinematics will be restricted within a narrow physical region. But still, observable effects can be expected. The explicit amplitudes of the diagrams in Fig. 4 can be found in Appendix B. Before we come to the calculation results for the final state invariant mass spectra, we first examine the cross section lineshape for e + e − → J/ψKK around the mass of ψ(4415). In Fig. 5 the calculated cross sections are compared with the experimental data from [34]. The ψ(4415) as the dynamically generated state which mixes with the quark model state has a lineshape which is apparently deviated from the symmetric Breit-Wigner distribution. Although it is not conclusive from the present data quality, such an effect can be investigated at BESIII or future Belle-II. The invariant mass spectrum of the J/ψK is generally sensitive to the TS mechanism. We plot the J/ψK spectra in Fig. 6 (a) where contributions from Fig. 4 (a) and (b) are both included. Also, in order to see the evolution of the TS contributions in terms of the initial energy, we plot the spectra at several energy points from 4.5 to 4.8 GeV. One can see that a CUSP structure, which is located at the common threshold ofD s D * andD * s D, appears in the J/ψK invariant mass spectrum. It is difficult to find very clear pole-like structure when the initial mass energy √ s is just above the threshold of D s1Ds or D s0D * s . But as the initial energy of ψ(4415) increases from 4.5 GeV to 4.8 GeV, a peak-like structure near the threshold ofD s D * (D * s D) indeed becomes more obvious. Since the mass of D s1 is so close to the threshold of D * K, we also discussed the behavior of the spectrum when the mass of D s1 is shifted a little bit in Appendix C. In Ref. [26] it has been shown that lower order singularities than the TS would not produce narrow and pronounced peaks if the interactions between the rescattering hadrons are not strong enough. Similar phenomenon is observed here as shown by Fig. 6. Because of the limited phase space, the TS condition cannot be fully satisfied, thus, the nontrivial threshold structure appears as a CUSP effect instead of the typical narrow peak [47]. Thus, this process can serve as an ideal channel for the search for possible exotic candidates without ambiguities from the kinematic effects. In Fig. 
6 (b) we show the calculations at the initial energy of 4.6 GeV but including explicitly a physical pole right at the mass of the threshold ofD s D * andD * s D, ∼ 3.98 GeV, with a typical width of 50 MeV. The consideration is that if there exists the strange partner Z cs of Z c (3900) as the hadronic molecules ofD s D * andD * s D, the pole structure near the open charm threshold will produce different lineshapes compared with the kinematic effects shown in Fig. 6 (a). Similar to the treatment of Refs. [4,6,7] the pole structure can be dynamically generated by the strongD s D * and D * s D interactions. Although the detailed dynamics need elaborate studies and are not going to be discussed here, we note that if any mechanism allows the formation of the exotic state with quark contents of ccqs (ccsq) in this process, the pole structure will appear explicitly in the J/ψK (J/ψK) invariant mass spectrum as the signature for a genuine state. The thin and broad solid lines in Fig. 6 (b) correspond to the pole coupled to J/ψK with couplings 0.25 and 0.5 of the nature scale, respectively. Compared with the other lines without the pole structure in the J/ψK invariant mass spectrum, it shows that the pole contributions and pure TS contributions behave quite differently. In this case, the TS mechanism can produce non-trivial lineshapes, but cannot produce predominant peaks at the threshold of J/ψK. If narrow and sharp-peaking structures are observed in the invariant mass spectrum of J/ψK, they can be confidently assigned as signatures for exotics. Also, note that the asymmetric lineshapes are because of the triangle function which will affect the formation of the exotic Z cs state. In this sense, this channel is ideal for testing the TS mechanism and searching for exotic candidates in e + e − annihilations. IV. SUMMARY In this work, we investigate phenomena arising from the possible strong couplings of the degenerate thresholds D s1Ds and D s0D * s which may lead to dynamically generated hadronic molecule states and mix with the nearby conventional charmonia ψ(4S) and ψ(2D). We find that such a mechanism may have observable effects on ψ(4415) with relatively large mixture of the D s1Ds and D s0D * s molecules, while its impact on ψ(4160) is relatively small. With the same coupling of ψ(4415) to D s1Ds and D s0D * s we study the J/ψK final state invariant mass spectrum in ψ(4415) → J/ψKK. It shows that nontrival lineshapes can be produced by the molecular nature of ψ(4415) in the invariant mass spectrum of J/ψK due to the presence of the TS mechanism. However, since the TS kinematic region is limited, it is unlikely that the TS mechanism along would generate peaking structures near the threshold of D * D s + c.c. and DD * s + c.c. This provides an ideal channel for testing the TS mechanism on the one hand, and on the other hand, pinning down the process which is sensitive to the production of exotic states Z cs near heavy flavor thresholds. We claim that any predominant peaking structure in the invariant mass spectrum of J/ψK should confirm its being a genuine state instead of kinematic effects. Experimental data from BESIII and Belle-II can help clarify such a phenomenon in the future. where a is light flavor index. We can then write the amplitude for both triangle diagrams in Fig. 
4 as follows in the non-relativistic limit of the heavy mesons whereg denotes the product of all the coupling constants from the vertices in the triangle diagram, and I (0) is the triangle diagram integral: By defining s = P 2 , s 1 = m 2 K + K − and s 2 = m 2 J/ψK − , we will have and is the energy of the Kaon at the J/ψK vertex. The amplitude is a function depending on s, s 1 and s 2 , with s the initial energy squared. The total cross section can be obtained by integrating over s 1 and s 2 in their phase spaces, and the invariant mass spectrum of J/ψK − can be obtained by only integrating over s 1 . For a 3-point loop diagram showed in Fig. 7, the location of external momentum variables for different kinds of singularities are determined by the Landau Equation [37]. When all the three internal particles get on-shell simultaneously, it pinches the leading singularity of the triangle loop which corresponds to the TS. In Fig. 7, given the initial energy square s, s 2 ≡ (p b + p c ) 2 , and s 3 ≡ p 2 a , the location of the TS can be determined by solving the Landau Equation: with λ(x, y, z) = (x − y − z) 2 − 4yz. This is the solutions of s when we fix the masses of all internal particles and s 2 , s 3 . By exchanging s 1 and s 2 , we can obtain the similar solutions for the TS in s 2 , i.e., With the help of the single dispersion relation for the 3-point function, we learn that only s − or s − 2 corresponds to the TS solutions within the physical boundary [24]. The normal and singular thresholds for s and s 2 with s 3 fixed can be determined as It describes the motion of the singular thresholds of the TS on the complex plane. Namely, with the fixed s 3 and internal masses, when s reaches s N , s − 2 will access its critical threshold s 2C . Then, with the increase of s from s N to s C , s − 2 will move from s 2C to s 2N . This motion will pinch the singularity in the denominator of the dispersion relation, and the range of the motion reflects how significant the TS mechanism can contribute to the loop function. As discussed in Ref. [24], the phase space of internal particle m 2 decays into m a + m 1 is correlated with the magnitude of the TS. In our case one notices that m 2 ≃ m a + m 1 , which means that the TS will be suppressed, or the TS contribution will reduce to a lower order singularity similar to that arising from a two-body cut, i.e. a CUSP effect. To demonstrate this we plot the invariant mass spectrum for J/ψK in Fig. 8 at the normal threshold of the initial energy s = (m Ds1 + m Ds ) 2 for the contributions from Fig. 4 (a) in e + e − → J/ψKK. In order to fulfill the TS condition, we increse the D s1 mass m Ds1 by a value of ∆m to make it close to the critical threshold for s 2C , i.e. to satisfy the on-shell condition for the D sD * threshold. As shown by the solid curve in Fig. 8, the kinematics for the TS cannot be fulfilled since the mass of the D s1 is about 41 MeV below the threshold of (m D * + m K ). Therefore, the critical threshold s 2C does not show up in the invariant mass spectrum. By increasing the mass of D s1 by ∆m, the TS will move to the physical kinematic region and the TS effects become more and more important. As shown by the dot-dashed curves in Fig. 8, the TS can produce narrow strong peak at the vicinity of the D sD * threshold. This is a direct demonstration of the role play by the TS when the kinematics are close to the TS condition. 
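As a quick numerical cross-check of the kinematic numbers quoted above, the short script below (a sketch assuming approximate PDG-style masses in GeV, which are not inputs taken from the paper) reproduces the two nearly degenerate thresholds at about 4.43 GeV, the roughly 41 MeV gap that keeps the D s1 → D * K decay closed, and the common D * D s / D D * s threshold (CUSP) position near 3.98 GeV.

```python
# Kinematic cross-check with approximate PDG-style masses (GeV); the exact
# numerical values are assumptions for illustration, not inputs of the paper.
m_Ds1  = 2.4595   # D_s1(2460)
m_Ds0  = 2.3178   # D_s0*(2317)
m_Ds   = 1.9683   # D_s
m_Dsst = 2.1122   # D_s*
m_Dst  = 2.0067   # D*(2007)^0
m_D    = 1.8648   # D^0
m_K    = 0.4937   # K^+

def kallen(x, y, z):
    """Kallen triangle function lambda(x, y, z) entering the Landau analysis."""
    return (x - y - z) ** 2 - 4.0 * y * z

# The two nearly degenerate S-wave thresholds quoted in the text (~4.43 GeV):
print("Ds1 Ds  threshold:", m_Ds1 + m_Ds)     # ~4.428
print("Ds0 Ds* threshold:", m_Ds0 + m_Dsst)   # ~4.430

# D_s1 sits below the D* K threshold, so D_s1 -> D* K is kinematically closed ...
print("(mD* + mK) - mDs1 =", (m_Dst + m_K) - m_Ds1, "GeV")   # ~0.041
# ... which the negative Kallen function confirms:
print("lambda(mDs1^2, mD*^2, mK^2) =", kallen(m_Ds1**2, m_Dst**2, m_K**2))

# Common D* Ds / D Ds* threshold where the CUSP sits in the J/psi K spectrum:
print("D* Ds threshold:", m_Dst + m_Ds)    # ~3.975
print("D Ds* threshold:", m_D + m_Dsst)    # ~3.977
```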
Figure 8 also shows that, for the physical case under discussion, the observation of a predominant peaking structure in the invariant mass spectrum of J/ψK would imply the existence of a genuine threshold state produced via the triangle transition process.
Model Expert System for Diagnosis of Covid-19 Using Naïve Bayes Classifier

This paper offers an expert system model for COVID-19 diagnosis as an effort to overcome the spread of COVID-19 in Indonesia. The expert system model was built using the Naive Bayes Classifier method. Model development is carried out with the stages of preliminary research, data collection, analysis, model design, implementation, and testing. The data used to build and test the model come from the health department and the group for the acceleration of Covid-19 countermeasures in Indonesia. The model was developed with a unified modeling language and a prototyping approach. Tests show that the developed COVID-19 diagnosis expert system model can diagnose COVID-19 based on the symptoms entered by the user into the system. The application of the model produced in this study helps doctors in diagnosing COVID-19.

Introduction

A novel coronavirus, labeled Covid-19, caused an outbreak in the city of Wuhan, China, and has further spread to other parts of China and many other countries in the world [1]. The common signs of infection include respiratory symptoms, fever, coughing, shortness of breath, and difficulty breathing. In more severe cases, the infection can cause pneumonia, acute respiratory syndrome, kidney failure, and even death. Technology is increasingly becoming a major part of today's healthcare. It has changed not only how patients communicate with doctors but also how healthcare is administered [2]. The early prediction and evaluation of disease severity are extremely important for patient prognosis. In this paper, we propose to investigate the Naïve Bayes event model for an expert system to diagnose Covid-19, because it has not been considered for this problem before. In response to the outbreak, we summarize the current knowledge of Covid-19 and compare it with previous experiences of the SARS outbreak in Hong Kong, studying effective measures to control the Covid-19 epidemic [3]. In Indonesia, up to March 17, 2020, there were 134 confirmed cases spread across eight provinces, namely Bali, Banten, DKI Jakarta, West Java, Central Java, West Kalimantan, North Sulawesi, and Yogyakarta [4]. In Indonesia, medical personnel who are experts in the field of Covid-19 are still limited, both in number and in working time. To help resolve the Covid-19 outbreak in the various provinces and districts of Indonesia, an expert system for diagnosing Covid-19 is therefore needed. It is expected to help doctors overcome the problem by providing a good solution. This expert system is built on the Bayes Classifier theorem: the initial conditions describe the observed phenomena, predetermined rules are then applied, and the hypothesis with the greatest probability value is taken to determine the conclusion and solution for those phenomena. In contrast to earlier expert systems built with the Bayes Classifier theorem, which diagnosed other human diseases and the symptoms that cause them, the present work is about detecting Covid-19. Based on the description above, the authors developed this expert system in the hope that it will work well and benefit the community. The system can assist doctors in dealing with patients who may have Covid-19.
Computer-aided diagnostic systems now play a significant role in this area, and a faster and more accurate system is necessary. So, in this research, we present our model of an expert system for the diagnosis of the Covid-19 epidemic in Indonesia. The rest of this paper is organized as follows. Section 2 presents a general overview of related work on expert systems in medicine, the Naïve Bayes method in related research, and the state of the art of this work. Section 3 explains the research method, including the research framework and the Naïve Bayes Classifier attributes. Section 4 presents and discusses the experimental results. Finally, a general conclusion of this work is presented in Section 5.

Related Work

One of the major issues that needs to be considered when talking about an expert system model is the method on which the model is built. However, other issues such as the application field and the model itself are also important to consider. There is related work on applications in the medical field, expert systems in medicine, the Naïve Bayes method, Covid-19, and so on, including research on Naïve Bayes Classifiers for authorship [5], artificial intelligence in the retina and beyond ocular disease [6], diagnosis of coronavirus disease [7], and prediction of the Covid-19 epidemic [1]. In this group, the Naïve Bayes method for an expert system model diagnosing Covid-19 has not been covered or discussed. Other research reported that existing computer-aided diagnostic systems play a significant role [8]; it used classification and feature selection methods for a medical aid system. Other published work covers a retrieval-based diagnostic aid [9], the development and use of a clinical decision support system [10], and diagnostics for supporting primary health care systems [2]. This group of research also has not discussed an expert system for diagnosing Covid-19 using the Naïve Bayes Classifier. Our research therefore has its state of the art in both the field and the method used in the system: an expert system for the diagnosis of Covid-19 using the Naïve Bayes Classifier.

Research Framework

For the research to be more directed, a research framework is needed as an illustration of the research conducted. The research framework used follows the flow of software development, because this research produces a system that can be used by the community. The research framework has six stages: preliminary research, data collection, analysis, design, implementation, and testing. The stages start with preliminary research, that is, identifying the problem by visiting the website www.who.int to look for common problems related to Covid-19. After the problem is found, data collection is done by gathering information from interviews with doctors to obtain valid knowledge. Then the data are analyzed and processed using the Bayes Classifier method. The design stage produces use case diagrams as an illustration of the relationship between the actors and the system. The resulting diagram is translated into program form using the Java programming language and a MySQL database. Finally, testing is done online so that everyone can consult the expert system.

Naive Bayes Classifier

The Naive Bayes Classifier is a probabilistic classifier based on Bayes' theorem, combined with the "naïve" assumption that each attribute or variable is independent.
The Naïve Bayes Classifier can be trained efficiently in supervised learning [11]. The quantities involved are:

X : data with an unknown class
H : the hypothesis that the data belong to a specific class
P(H|X) : the probability of hypothesis H given condition X (posterior probability)
P(H) : the probability of hypothesis H (prior probability)
P(X|H) : the probability of X given hypothesis H
P(X) : the probability of X

so that the posterior follows Bayes' theorem, P(H|X) = P(X|H) P(H) / P(X). Characteristics of the Naive Bayes Classifier [11]: the Naïve Bayes method works robustly on isolated data, which is usually data with different characteristics (outliers). Naïve Bayes can also handle incorrect attribute values by ignoring such training data during model development and prediction.

Results and Discussion

The problem discussed in this study is Covid-19 disease. The data requirements of the expert system are the data used in identifying problems, acquired as knowledge. The data obtained from interviews with experts, covering symptoms and types of diseases as well as the rules set from them, are presented in Table 1, Table 2, and Table 3 [7]. The rules in Table 3 are developed using an approach similar to other rule-based research [12]. The rule base constructed in this research can also be used to construct a patient information tracking model [13]. Some symptoms of Covid-19 in Table 2 are adopted from the systemic disorders in [14]. Systemic disorders were used in similar research on SARS, but they may also be used in research on Covid-19 because both epidemics share some of the same symptoms. In Covid-19, the systemic disorders include fever, cough, and fatigue. These rules are then implemented in an application, as in other research [15] and in expert clinical decision support systems [16]. The three tables above serve as a reference for taking data samples to test the manual Naïve Bayes Classifier calculation. The test aims to determine how accurately the system's diagnosis in this research matches the manual calculation. As an example, the Naïve Bayes Classifier calculation can be applied to a user experiencing the symptoms g2 = fever or history of fever, g3 = severe pneumonia or acute respiratory infection (ARI), g6 = a history of travel to or residence in a local transmission area in Indonesia in the last 14 days before symptoms, and g7 = contact with a confirmed Covid-19 case in the last 14 days before symptoms. Following the Naïve Bayes Classifier steps, the largest value v is 0.3400, so it can be concluded that the user belongs to the PDP category; a minimal code sketch of this kind of calculation is given below.

Design and Implementation

The model of the expert system for diagnosis of Covid-19 was developed using a use case diagram, which illustrates the relationship between the actors and the system [17]. In this expert system, the actors are the Expert System Manager and the Member. The implementation of the expert system model for Covid-19 diagnosis in this research uses the Java programming language and MySQL Server for the database. The expert system program was tested online by asking patients to consult the expert system in order to obtain an initial diagnosis of the Covid-19 disease suffered by the patient. Figures 3 and 4 illustrate the expert system interfaces. Figure 4 is the consultation page, where the patient enters the symptoms they feel. The results of this research in Figure 4 and Figure 5 are similar to related research aiming to diagnose Covid-19 earlier and to improve its treatment by applying medical technology [18].
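To make the classification step described above concrete, the following is a minimal, self-contained sketch of the kind of Naïve Bayes computation the system performs (Python is used here purely for illustration; the actual system is implemented in Java with MySQL). The symptom codes follow the paper's g-notation, but the class labels other than PDP and all prior and conditional probabilities are illustrative placeholders, since the real values come from the expert tables (Tables 1 to 3), which are not reproduced here.

```python
# Minimal Naive Bayes sketch for the symptom-based diagnosis described above.
# All numeric probabilities below are placeholders, NOT the values from Tables 1-3.

categories = ["ODP", "PDP", "Confirmed"]                 # assumed class labels
priors = {"ODP": 0.40, "PDP": 0.35, "Confirmed": 0.25}   # P(H), placeholders

# P(symptom present | category), one entry per symptom code (placeholders).
likelihoods = {
    "ODP":       {"g2": 0.5, "g3": 0.1, "g6": 0.6, "g7": 0.2},
    "PDP":       {"g2": 0.8, "g3": 0.7, "g6": 0.7, "g7": 0.5},
    "Confirmed": {"g2": 0.9, "g3": 0.8, "g6": 0.8, "g7": 0.9},
}

def classify(observed_symptoms):
    """Return the category with the largest (unnormalised) posterior value v."""
    scores = {}
    for c in categories:
        v = priors[c]
        for g in observed_symptoms:
            # "Naive" independence assumption: multiply per-symptom likelihoods.
            v *= likelihoods[c].get(g, 1e-6)
        scores[c] = v
    return max(scores, key=scores.get), scores

# Worked example from the text: the user reports symptoms g2, g3, g6 and g7.
best, scores = classify(["g2", "g3", "g6", "g7"])
print(best, scores)
```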
The following figure provides the results of the consultation carried out by the patient. The result of the expert system model for diagnosing Covid-19 in Figure 5 is an early diagnosis, which can increase the survival rate and help prevent deaths caused by Covid-19, as in similar research about diagnosis [9]. This research is thus part of a line of studies that address the design of a decision support system for diagnosing the severity of SAD [10] and, more generally, it uses the concept of a decision support system [19]. This paper has demonstrated a simple, rule-based expert system model and the dataset of newly diagnosed cases in Indonesia for the diagnosis of Covid-19 using the Naïve Bayes method. The research links the data from the patients with the rule base, which results in the expert system pages (Figure 5). The result partly answers questions raised in other work on Covid-19 [1], and it can be used by the government of Indonesia to make decisions about Covid-19.

Conclusion

This research has proposed a model expert system for the diagnosis of Covid-19 using the Naïve Bayes Classifier. It is useful and can help the community, and especially the government of Indonesia, in recognizing Covid-19 disease. The application of the expert system in this research provides information on early symptoms of Covid-19 in patients that was previously unknown to the general public. It can trace and explain the results of the diagnosis and provide better guidance. In testing conducted by patients, the expert system for diagnosing Covid-19 obtained results based on the recorded facts.
Suppressing MicroRNA-30b by Estrogen Promotes Osteogenesis in Bone Marrow Mesenchymal Stem Cells MicroRNAs (miRNAs) have been widely demonstrated to interact with multiple cellular signaling pathways and to participate in a wide range of physiological processes. Estradiol-17β (E2) is the most potent and prevalent endogenous estrogen that plays a vital role in promoting bone formation and reducing bone resorption. Currently, little is known about the regulation of miRNAs in E2-induced osteogenic differentiation. In the present study, the primary bone marrow mesenchymal stem cells from rats (rBMSCs) were isolated and incubated with E2, followed by miRNA profiling. The microarray showed that 29 miRNAs were differentially expressed in response to E2 stimulation. Further verification by real-time reverse-transcriptase polymerase chain reaction revealed that E2 enhanced the expression of let-7b and miR-25 but suppressed the miR-30b expression. Moreover, a gain-of-function experiment confirmed that miR-30b negatively regulated the E2-induced osteogenic differentiation. These data suggest an important role of miRNAs in osteogenic differentiation. Introduction Osteoporosis is a global public health problem and potentially causes serious fractures, disability, and chronic pain, thus leading to financial burdens for families and lower quality of life for individuals. Estrogen deficiency is one of the main causes of osteoporosis, especially in postmenopausal women [1]. As a steroid hormone, estrogen plays an important role in skeletal homeostasis. Bone remodeling is a process that relies on the dynamic equilibrium between osteoclasts and osteoblasts. It has been well established by both in vivo and in vitro studies that estrogen inhibits osteoclast formation [2]. Estrogen not only suppresses the formation but also promotes the apoptosis of osteoclasts by regulating the release of cytokines, including interleukin-1, interleukin-6, receptor activator of nuclear factor kappa-B ligand (RANKL), tumor necrosis factor-α (TNFα), osteoprotegerin (OPG), and macrophage colony-stimulating factor (MCSF) in the bone microenvironment [3][4][5]. Recently, several studies suggested that estrogen could also inhibit osteoblast apoptosis and promote osteoblast differentiation, thus protecting against bone loss [6,7]. Estradiol-17β (E2) is the most potent estrogen that could effectively improve bone mesenchymal stem cell (BMSC) proliferation and osteogenic differentiation in various species, including mouse, rat, and human [8][9][10]. However, the precise molecular mechanisms underlying the observed effects of E2 on osteoblast differentiation are still not fully understood. According to its ability to improve bone mineral density and reduce fracture incidence, estrogen replacement therapy is a conventional way to treat osteoporosis. However, studies have shown that the use of estrogen increases the risk of breast cancer, ovarian cancer, thrombotic stroke, and myocardial infarction [11][12][13]. Therefore, investigating the molecular mechanisms of estrogen-induced osteoblastic differentiation is of great significance because this may potentially inspire precise and targeted therapy for osteoporosis. MicroRNAs (miRNAs) are a class of small noncoding RNA molecules that govern gene expression at the posttranscriptional level [14]. Expression of miRNA is characteristically spatiotemporal and tissue specific, making them promising targets for precise treatment of various disease [15,16]. 
Several miRNAs have been demonstrated to have clear effects on various cancers and are expected to prevent the undesirable effects of conventional treatments. It was suggested that the miRNA replacement therapy in cancers might reduce blood cell reduction, diarrhea, and constipation, all of which could be caused by conventional chemotherapeutic drugs [17]. These implied that it is reasonable to explore the precise osteoporosis treatment from the view of microRNA regulation. Meanwhile, emerging evidence has shown that miRNAs are closely involved in regulating osteogenic differentiation of BMSCs [18]. For example, miR-10a, miR-21, and miR-96 have a positive effect on regulating osteogenic differentiation [19][20][21], while miR-103a, miR-200a, and miR-141 inhibit osteogenic differentiation [22,23]. Therefore, it is feasible to analyze the underlying mechanism of estrogen-induced osteogenesis from the view of microRNA regulation and explore the potential precise treatment for osteoporosis based on it. In this study, we established an in vitro model of E2induced osteogenic differentiation by using rat bone marrow mesenchymal stem cells. Microarray and gain-of-function experiments were performed to analyze the significance of miRNAs in E2-induced osteogenic differentiation. This study may provide the foundation for further work on miRNAs in estrogen-induced osteogenesis and may inspire new strategies to treat bone defects or osteoporosis. Materials and Methods This research was performed under the approval of the Institutional Animal Care and Use Committee of Sun Yat-sen University (Guangzhou, China). All experimental methods were performed in accordance with relevant guidelines and regulations. 2.1. Isolation, Culture, and Identification of rBMSCs. Male Sprague-Dawley rats (4 weeks old) were sacrificed by cervical dislocation. rBMSCs were isolated from bilateral femurs and tibias of the rats as described previously [24]. Immunophenotyping of rBMSCs was analyzed by flow cytometry using fluorescein isothiocyanate-conjugated antibodies which were purchased from eBioscience (California, USA). Alizarin Red Staining and Oil Red O Staining. Since the production of calcium nodules needs a long-term stimulation, the rBMSCs were cultured for 5, 9, 13, or 19 days and then subjected to alizarin red staining. rBMSCs cultured in a 6-well plate were washed with PBS and fixed with 70% ice-cold ethanol at 4°C for 1 hour, followed by 1% Alizarin Red (Sigma-Aldrich, Cat. No. A5533) staining for 10 minutes at room temperature. For oil red O staining, rBMSCs cultured in adipogenic differentiation medium for 7 days were washed with PBS and fixed with 4% paraformaldehyde for 15 minutes at room temperature, followed by staining in isopropanol solution of oil red O for 30 minutes at room temperature. Samples were observed under a phase-contrast microscope, and the images were acquired with a scanner. Three independent experiments were performed, and the quantification of the calcium nodules was analyzed by the ImageJ Software based on the whole-well image. 2.5. miRNA Microarray. Total RNA of the control cells (cultured in OM) and the E2-incubated rBMSCs (cultured in OM supplemented with 100 nM E2 for 13 days) was isolated with TRIzol (Invitrogen, Cat. No. 15596018, California, USA) and purified with the RNeasy Mini Kit (Qiagen, Cat. No. 74104, Hilden, Germany) according to the manufacturer's instructions. 
RNA quality and quantity were measured by a NanoDrop spectrophotometer (ND-1000, NanoDrop Technologies Inc.), and RNA integrity was determined by gel electrophoresis. The miRCURY™ Hy3™/Hy5™ Power Labeling Kit (Exiqon, Cat. No. 208035, Denmark) was used for miRNA labeling according to the manufacturer's guideline. After the labeling procedure, the Hy3™-labeled samples were hybridized on the miRCURY™ LNA Array V19.0 (Exiqon) according to the array manual. Then, the slides were scanned by the Axon GenePix 4000B Microarray Scanner (Axon Instruments, USA). Scanned images were then imported into the GenePix Pro Software V6.0 for grid alignment and data extraction and normalization. A median normalization method was used. Differentially expressed miRNAs between two groups were identified with the fold-change thresholds of >1.5 or <0.67. Finally, a hierarchical clustering was performed to show the distinguishable miRNA expression profiling between these two groups. The heat map of the differentially expressed miRNAs was made by the Mev clustering software based on the normalized intensities of each group [25]. The pathway analysis was performed by the Nimble Scan V2.5 (Roche NimbleGen Inc., Madison, WI, USA). 2.8. Statistical Analysis. SPSS statistical software V20.0 was used for data analysis. The data were presented as the mean ± standard error of the mean. Statistical significance was analyzed with one-way ANOVA. p < 0 05 was considered statistically significant. Identification of Rat Bone Marrow Mesenchymal Stem Cells (rBMSCs). On the third passage, the isolated rBMSCs displayed rapid proliferation and a fibroblast-like appearance. A flow cytometry assay revealed that the rBMSCs were positive for mesenchymal marker CD29 and stem cell marker CD90 but were negative for myelogenous markers CD11b/c and CD45 ( Figure S1A). Next, the isolated rBMSCs were analyzed for their capacity of multidirectional differentiation, since rBMSCs are capable of differentiating into osteogenic or adipogenic lineages. As expected, rBMSCs that were cultured in the osteogenic differentiation medium (OM) displayed significant calcium deposits, which indicate osteogenic differentiation, compared with those cultured in complete medium (CM) ( Figure S1B). Furthermore, rBMSCs that were cultured in adipogenic differentiation medium accumulated lipid droplets, indicating the adipogenic differentiation ( Figure S1C). The Viability of rBMSCs Cultured with E2. To explore whether E2 would affect the viability of rBMSCs, the cells were treated with a range of concentrations of E2 (0 nM, 1 nM, 10 nM, 100 nM, 500 nM, and 1 μM) for 24-48 h and then CCK-8 assays were performed. No significant differences in toxic effect were observed among these concentrations ( Figure 1). E2 Induced the Osteogenic Differentiation in rBMSCs. To confirm the effect of E2 on osteogenic differentiation, the rBMSCs were treated with different doses (1 nM, 10 nM, and 100 nM) of E2 for 5, 9, or 13 days, followed by immunoblotting analysis for the expression of osteogenesis-related proteins, including ALP, RUNX2, OCN, and OPN. As shown in all three incubation periods, E2 treatment resulted in a significant increase of ALP, RUNX2, OCN, and OPN levels consistent with the series of E2 concentrations ( Figure 2). The effect of E2 on the production of calcium nodules was further investigated. 
The alizarin red staining assay revealed that after being cultured in OM for 13 and 19 days, rBMSCs exhibited significant calcium nodule formation, which was not observed in cells cultured in CM. Moreover, E2 addition to OM further promoted the production of calcium nodules in rBMSCs in a dose-dependent manner ( Figure 3 and Figure S2). Taken together, the above results indicate that E2 enhanced the osteogenic differentiation of rBMSCs. In order to explore the potential mechanism of E2induced osteogenic differentiation, the expression of BMP2 was detected in the early stage of E2 stimulation. As shown in Figure 4, the expression of BMP2 was increased with 10 nM and 100 nM E2 stimulation for 5 days. After the stimulation of E2 for 9 days, the expression of BMP2 was also increased in the 1 nM, 10 nM, and 100 nM group. Therefore, BMP2 was involved in the regulation of E2-induced osteogenic differentiation. E2-Induced Osteogenesis Involved Altered miRNA Expression. To investigate the miRNA expression in E2induced osteogenic differentiation in rBMSCs, the total RNAs were extracted from E2-treated or nontreated cells for microarray screening. Among the 700 miRNAs represented on the chip, 29 were differentially expressed in response to E2 treatment ( Figure 5(a)). Most of these miRNAs (21/29) were downregulated, while 8 of them were upregulated compared with the control group during this time frame (Table 1 and Table S1). Consistently, the majority (19/29) of these miRNAs have been categorized as related to osteogenic differentiation in previous studies (Table S2). Potential target genes of these 29 miRNAs were predicted using the databases Microcosm, Miranda, and Mirdb. The genes that were identified as targets of the 29 miRNAs by all three databases were further subjected to the pathway enrichment analysis. The KEGG pathway analysis showed that the predicted target genes were enriched in ten signaling pathways ( Figure 5(b)). Among these pathways, JAK-STAT signaling, PI3K-AKT signaling, and calcium signaling have been proven to be closely related to osteogenesis or bone metabolism [26][27][28]. These results implied that miRNAs may be involved in the regulation of E2-induced osteogenesis. We selected 3 miRNAs (let-7b, miR-25, and miR-30b) to confirm their alteration induced by E2, using a quantitative PCR assay. As shown, the expression levels of let-7b and miR-25 were upregulated 1.5-fold, while the expression level of miR-30b dropped by 58%, compared with the control groups ( Figure 5(c)). These results suggest that E2 promoted the expression of let-7b and miR-25 and decreased the miR-30b expression. 3.5. miR-30b Regulated the E2-Induced Osteogenesis. miR-30b was chosen for an investigation of its effect on E2induced osteogenic differentiation. Immunoblotting assay revealed that overexpression of miR-30b markedly attenuated the expression of E2-induced osteogenesis-related proteins, including ALP, RUNX2, OCN, and OPN, while knockdown of miR-30b significantly increased the expression of these proteins (Figures 6(a) and 6(b)). Furthermore, alizarin red staining revealed that the production of a mineralized nodule in E2-treated rBMSCs was significantly suppressed by miR-30b overexpression, but it was apparently restored by miR-30b silencing (Figure 7). These data indicated that miR-30b may negatively regulate E2induced osteogenesis. Discussion Hormone replacement therapy (HRT) is a common treatment for osteoporosis. However, its long-term use is restricted by potential complications [29,30]. 
Understanding the cellular and molecular mechanisms of the estrogeninduced effects on osteoporosis may provide novel and precise treatments, which could avoid these systemic side effects. miRNAs are considerably small molecules and their expression is strictly spatiotemporal and tissue specific [31], making them promising targets for the precise treatment of various diseases. Our study revealed the miRNA expression profile during estrogen-promoted osteogenic differentiation and explored the role of miR-30b in this process, indicating that miRNA-targeted treatment could be a new strategy for osteoporosis therapy. According to previous studies, the concentration of E2 is used to induce osteogenic differentiation ranging from 0.1 to 100 nM [32][33][34]. Moreover, the cytotoxicity test of E2 revealed that the concentration of E2 from 1 nM to 1 μM was nontoxic to BMSCs. Therefore, we choose 1 nM to 100 nM E2 in this study. According to our results, 100 nM E2 has been found to have an obvious osteogenic effect on BMSCs. let-7b, miR-25, and miR-30b were selected to validate the microarray results via a quantitative PCR assay. The reasons for selecting these miRNAs to investigate the effect on E2induced osteogenic differentiation are as follows: (1) The expression levels of these miRNAs are relatively high in rBMSCs according to the intensity values detected by microarray analysis (Table 1), which facilitates verification by quantitative PCR assay. (2) Among the 29 miRNAs, the alterations in expression of these miRNAs are the most significant. (3) It has been suggested that these miRNAs were closely related to mineralization [23] and osteogenesis [18]. One target gene of miR-30b is RUNX2, which is an important osteogenic differentiation marker and was found to be upregulated by E2 in the present study. Hence, miR-30b was more likely to regulate E2-induced osteogenic differentiation and was therefore selected to investigate its effect. Both the let-7 family and miR-25 have been found to be related to osteogenesis in a previous study. For example, the let-7 family was able to enhance the osteogenesis and repress the adipogenesis of human stromal/mesenchymal stem cells [42], and miR-25 played a role in regulating osteoblast differentiation in the osteoblast-like line MG-63 [43]. However, whether they were involved in the regulation of E2-induced osteogenic differentiation was still unknown. Our study provided a first clue that both let-7b and miR-25 play a positive role in E2-induced osteogenesis in rBMSCs. The expression of miR-30b was decreased in E2-induced osteogenesis. It has been reported that BMP2 promoted vascular smooth muscle cell (VSMC) calcification by downregulating miR-30b and miR-30c expression [44]. The process of vascular calcification is highly similar to physiological mineralization, which consists of the degradation of pyrophosphate by alkaline phosphatase and the deposition of hydroxyapatite crystals on the collagen-rich matrix [45,46]. Our data further implied that miR-30b negatively regulated E2-induced osteogenesis by interaction with RUNX2 ( Figure 8). Meanwhile, Runx2 is one of the downstream factors of the BMP2 pathway. Therefore, E2 is supposed to promote osteogenesis via BMP2/miR-30b/Runx2 signaling. In addition, the expression of ALP, OCN, and OPN was inhibited by miR-30b. 
ALP, OPN, and OCN were not predicted as the target genes of miR-30b; therefore, the expression of these three proteins might not have been regulated by miR-30b directly but rather by other factors or molecular pathways. As a transcription factor, RUNX2 transactivates the expression of OPN and OCN [47][48][49]. Therefore, miR-30b may inhibit the expression of OPN and OCN via RUNX2. On the other hand, miR-30b can inhibit autophagy by directly targeting beclin1 (BECN1) and autophagy protein 5 (ATG5) [50]. Recent studies have found that autophagy promotes osteogenic differentiation of MSCs [6,51]. Therefore, autophagy may be another possible mechanism underlying the inhibition of osteogenic differentiation by miR-30b. However, this speculation requires more investigation for confirmation. Conclusions In conclusion, our study demonstrates that E2 can effectively promote osteogenic differentiation of rat BMSCs and can provide an insight into the potential contribution of miRNAs to E2-induced osteogenesis. These findings inform us that miR-30b can be a possible therapeutic target to treat osteoporosis. Further in vivo experiments are needed to support its application. Data Availability The data used to support the findings of this study are available from the corresponding author upon request. Conflicts of Interest The authors declare that there is no conflict of interest regarding the publication of this paper. Supplementary Materials Supplementary 1. Table 1: detailed data of the altered miR-NAs. The fold change (E2/control), basal intensity values, and the normalized data of the differentially expressed miRNAs are shown. The fold change = normalized data in the E2 group/normalized data in the control group. Figure 1: identification of rBMSCs. (a) Flow cytometry assay shows the surface markers of rBMSCs. Isotype controls are presented as red plots, and the specific cell surface markers are presented as blue plots. The isolated rBMSCs were positive for CD29 and CD90, both of which are the markers of rBMSCs, but were negative for myelogenous makers CD11b/c and CD45. (b, c) The multidirectional differentiation abilities of rBMSCs. (b) The alizarin red staining showed that rBMSCs cultured in the osteogenic differentiation medium displayed significant calcium deposits compared with the control group, and (c) the oil red O staining showed that rBMSCs cultured in adipogenic differentiation
The computational design of junctions by carbon nanotube insertion into a graphene matrix Using first-principles density functional theory calculations, two types of junction models constructed from armchair and zigzag carbon nanotube (CNT) insertion into a graphene matrix have been envisioned. It has been found that the insertion of the CNT into the graphene matrix leads to the formation of C–C covalent bonds between graphene and the CNT that distort the CNT geometry. However, the hydrogenation of the suspended carbon bonds on the graphene resumes the graphene-like structure of the pristine tube. The calculated band structure of armchair CNT insertion into graphene or hydrogenation graphene opens up a band gap and converts the metallic CNT into a semiconductor. For the zigzag CNT, the sp3 hybridization between the graphene and nanotube alters the band structure of the tube significantly, whereas saturating the dangling bonds of terminal carbon atoms of graphene makes the CNT almost keep the same character of the bands as that in the pristine tube. The synthesis of our designed hybrid structures must be increasingly driven by an interest in molecules that not only have intriguing structures but also have special functions such as hydrogen storage. 2 CNTs depends sensitively on the precise way a graphene layer is rolled up into the tubular structure, and is identified by the chiral index (n, m) [4]. Combined with their ballistic electronic transport characteristics [5], this opens exciting possibilities for the designing of novel electronic components [6]. Furthermore, the functionalization of CNTs might lead to new opportunities in carbon-based nano-electronic devices [7]- [13]. Recently, the rise of 2D graphene science has been accelerated by the new technological expectations that have emerged in the field of carbon-based nanoelectronics [14]- [16]. As a 2D atomic crystal [17], graphene has triggered tantalizing enthusiasm for new discoveries [18,19]. It is found that the chiral nature of charge excitations in 2D graphene has been shown to result in unconventional quantum transport features, such as unusual quantum Hall effects [20,21]. To construct logic circuits based on graphene and CNT units, it is necessary to join them in particular ways. One possible method to connect them is by using irradiation by energetic electrons [22] or atoms [23] in order to introduce topological defects such as pentagons, heptagons and octagons into a perfectly hexagonal lattice. Subsequently, the graphene and CNTs could be connected by the formation of new C-C bonds at the interface. Alternatively, chemical reactions could be applied to form covalent linkages between graphene and CNTs, for this method has been attempted to fabricate CNT junctions between various CNTs with different diameters [24,25]. First-principles calculations have validated that linear, T-and H-shaped junctions within the connection modes between CNT and graphene nanoribbon units could be constructed [26]. Moreover, the simulation results of non-equilibrium Green's function predicted that the proposed models had potential applications in nanoelectronics. In this paper, we use first-principles calculations to probe the possibility of building basic CNT-graphene junctions via the covalent attachment of a single layer graphene to the sidewall of CNTs. The main motivation of this work is to establish a basic physical picture of junction construction between a CNT and graphene by the insertion of a tube into a graphene matrix. 
We chose an 8 × 8 graphene supercell as the embedding medium so as to minimize the tube-tube strain arising from neighboring supercells. Another condition for reduced strain is the selection of CNT diameters such that the CNT-graphene distances are consistent with typical van der Waals values. For these reasons, we choose the armchair (5,5) CNT and the zigzag (8,0) CNT as the candidates for our study. We find that the placement of a CNT in the graphene matrix leads to the formation of C-C covalent bonds between the graphene and the sidewall of the CNT that distort the CNT geometry. In particular, the interaction of hydrogen atoms with the dangling carbon bonds on the graphene allows the CNT to recover its pristine, graphene-like structure. Since the graphene introduces a partial sp3 character into the sp2 graphitic network of the (5,5) tube, it opens up a band gap and converts the metallic (5,5) CNT into a semiconductor. For the semiconducting (8,0) CNT, the sp3 hybridization between the graphene and the nanotube induces impurity states at the Fermi level, whereas hydrogenation of the graphene's dangling-bond C atoms narrows its fundamental band gap. Our study is based on first-principles plane-wave pseudopotential density functional theory as implemented in the CASTEP code [27]. For the exchange and correlation term, the generalized gradient approximation (GGA) in the Perdew-Burke-Ernzerhof (PBE) form is adopted [28]. We use ultrasoft pseudopotentials [29] for the carbon atoms and a plane-wave cutoff of 300 eV. The Brillouin zone integration is performed within the Monkhorst-Pack scheme [30]. Given the large number of atoms in the supercell and the associated computational cost, only the Γ point is used for Brillouin zone sampling during the structural optimization, while more k points along the tube axis are included in the calculation of the band structure. Atomic positions are optimized until the magnitude of the force acting on every atom is less than 0.01 eV Å−1. By this criterion, the geometry is converged to within 10−3 Å, leading to a convergence of the total energy to within 10−5 eV. In addition, finite-basis-set corrections are included. Calculations are carried out within a periodically repeating supercell geometry because of the necessity of using periodic boundary conditions with the plane-wave method. We use an 8 × 8 hexagonal graphene supercell with lattice constants a_sc, b_sc and c_sc. The lattice constants a_sc and b_sc are chosen such that the interaction between nearest-neighbor tubes is negligible (the minimum C-C distance between nearest-neighbor tubes is 9.84 Å). The lattice constant along the axis of the tube, c_sc, is taken to be equal to four times (for the armchair tube) and twice (for the zigzag tube) the 1D lattice parameter c of the tube, ensuring no interaction between neighboring graphene layers. The tube axis is taken along the z-direction, and the circular cross section lies in the (x, y)-plane. We start from the constructed supercell, in which carbon atoms have been removed from the center, leaving a hole suitable for placing an armchair (5,5) CNT or a zigzag (8,0) CNT. Figures 1(a) and (b) show a schematic of a 3D network based on tube insertion into a graphene matrix. In the initial configurations, the (5,5) and (8,0) CNTs are placed at the center of the graphene hole.
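The construction of the initial junction supercell described above is straightforward to script. The following sketch is illustrative only: the relaxations in this work were performed with CASTEP, whereas ASE is assumed here merely for geometry bookkeeping and file output, and the hole radius of 5.0 Å and the output filename are arbitrary choices.

```python
# Illustrative construction of the CNT-in-graphene supercell (not the authors' script):
# ASE is assumed only for bookkeeping; the hole radius and filename are assumptions.
import numpy as np
from ase import Atoms
from ase.build import nanotube

A = 2.46                                   # graphene lattice constant (Angstrom)
N = 8                                      # 8 x 8 supercell, as in the text

# Two-atom graphene basis tiled into an N x N sheet in the xy-plane.
a1 = np.array([A, 0.0, 0.0])
a2 = np.array([A / 2.0, A * np.sqrt(3) / 2.0, 0.0])
basis = [np.zeros(3), (a1 + a2) / 3.0]
positions = [i * a1 + j * a2 + b for i in range(N) for j in range(N) for b in basis]
sheet = Atoms("C" * len(positions), positions=positions)

# Carve a hole centred on the supercell; 5.0 A is an assumed clearance for a (5,5)
# tube (radius ~3.4 A), leaving relaxation to decide which C-C bridges form.
center = (N / 2.0) * (a1 + a2)
sheet = sheet[[a.index for a in sheet
               if np.linalg.norm(a.position[:2] - center[:2]) > 5.0]]

# A (5,5) armchair tube, four unit cells long (c_sc = 4c = 9.84 A), axis along z.
tube = nanotube(5, 5, length=4)
com = tube.get_center_of_mass()
tube.translate([center[0] - com[0], center[1] - com[1], 0.0])

junction = sheet + tube
junction.set_cell([N * a1, N * a2, [0.0, 0.0, 4 * A]])
junction.set_pbc(True)
junction.write("cnt_graphene_junction.cif")   # starting geometry for the DFT relaxation
```

The exported geometry would then be relaxed with the settings quoted above (PBE, 300 eV cutoff, ultrasoft pseudopotentials, Γ-point sampling).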
This 3D network could be a prototype for next-generation carbon-based microelectronic circuits: the basic logic circuits are constructed from the CNTs, and the sheets of circuits are bridged by graphene. Allowing for ionic relaxation of the initial structures results in the formation of a number of C-C bonds between the graphene and the CNT, as shown in figure 2. The (5,5) and (8,0) nanotubes in the relaxed configurations are anchored at many sites. As is evident from figures 2(a) and (c), the formation of the C-C bridges is accompanied by large distortions of the nanotube. A range of bond lengths and angles is found for the C-C bridges in the distorted configuration of the (5,5) CNT in figures 2(a) and (b). The C-C bond lengths for these bridges range between 1.488 and 1.601 Å. A larger variation is found for the C-C-C angles, which range between 103.4° and 130.4°. The formation of numerous C-C bridges also results after ionic relaxation when an (8,0) CNT is placed in the graphene hollow. As in the (5,5) CNT case, the formation of these bridges results in significant CNT distortions, albeit of a lesser degree compared to the (5,5) CNT, owing to the difference in diameter. The C-C bond lengths for the bridges between the graphene and the (8,0) tube range between 1.509 and 1.578 Å, and the C-C-C angles of the bridges in the (8,0) case range between 105.3° and 134.8°. The average length of the C-C bonds on the bridges in both cases is longer than the perfect C-C bond length of 1.42 Å in a CNT or graphene [4]. This indicates that the π electrons of the C-C bonds on the bridges between the tube and the graphene are not strictly localized on such bonds, but tend to delocalize over the nearby graphene and tube units. The large distortion of the sidewall accompanying the functionalization by graphene, however, is an undesirable effect. In order to take advantage of the encapsulation without the distortion caused by CNT-graphene bonding, we investigated the effect of hydrogenation of the carbon atoms at the edge of the graphene hole. We allowed H atoms to saturate the carbon atoms at the edge of the graphene hole in the initial, unrelaxed structure and then re-relaxed it. A single hydrogenation step of this kind results in the breakup of a C-C bridge between the graphene and the tube. As seen in figures 3 and 4, one end of the bridge relaxes back towards the graphene matrix as a CH group, and the other C atom of the bridge relaxes back to assume a configuration closer to the pristine tubular structure of the CNT. Tsetseris et al [31] investigated the CNT-SiO2 interface of an embedded CNT using first-principles calculations. Their results showed that strong Si-O-C bonds were formed and that subsequent hydrogenation eliminated all the Si-O-C bonds, which supports the plausibility of our relaxed structure after hydrogenation. Having established that graphene forms strong covalent bonds with nanotubes and that hydrogenation eliminates the C-C bonds between the graphene and the CNT, we next investigate the effect of functionalization on the electronic structure of the CNTs. The band structures for the isolated (5,5) CNT, the (5,5) CNT inserted into graphene, and the (5,5) CNT inserted into hydrogenated graphene are shown in figure 4. Figure 4(a) shows that the armchair (5,5) CNT is metallic, because two energy bands near the Fermi energy cross the Fermi level at k ≈ 2/3 of the Brillouin zone, as required by symmetry and suggested by band-folding theory [32].
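The bridge bond lengths and C-C-C angles quoted above are simple geometric quantities that can be tabulated directly from the relaxed coordinates. The sketch below assumes the relaxed structure has been written to a file (the filename and the 1.70 Å bond cutoff are illustrative assumptions), and it ignores periodic images for brevity.

```python
# Minimal sketch: extract C-C bond lengths and C-C-C angles from a relaxed structure.
import numpy as np
from itertools import combinations
from ase.io import read

atoms = read("cnt_graphene_junction_relaxed.cif")   # assumed output of the relaxation
pos = atoms.get_positions()
n = len(atoms)

# Neighbour list from a simple distance cutoff (no minimum-image handling here).
cutoff = 1.70
bonds = [(i, j) for i in range(n) for j in range(i + 1, n)
         if np.linalg.norm(pos[i] - pos[j]) < cutoff]

lengths = [np.linalg.norm(pos[i] - pos[j]) for i, j in bonds]
print(f"C-C bond lengths: {min(lengths):.3f} - {max(lengths):.3f} A")

# C-C-C angles around every atom with at least two bonded neighbours.
neigh = {i: [] for i in range(n)}
for i, j in bonds:
    neigh[i].append(j)
    neigh[j].append(i)

angles = []
for c, nb in neigh.items():
    for a, b in combinations(nb, 2):
        v1, v2 = pos[a] - pos[c], pos[b] - pos[c]
        cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
        angles.append(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))
print(f"C-C-C angles: {min(angles):.1f} - {max(angles):.1f} degrees")
```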
The most interesting case arises for the (5,5) CNT inserted into graphene or hydrogenated graphene, since the pristine tube is metallic. Here, the addition of graphene or hydrogenated graphene on the sidewall of the CNT reduces the symmetry and introduces additional couplings between the conduction-band and valence-band states of the CNT that open an energy gap of about 0.2 eV near the original Fermi level. Owing to the isoelectronic character of the graphene and the CNT, the Fermi level remains in the midgap region and the CNT is thus a semiconductor (figures 4(b) and (c)). The functionalization with graphene or hydrogenated graphene on the sidewall of the CNT therefore seems to provide a new and simple route to interconnecting CNTs while keeping them semiconducting: if their chirality were such that they were metallic, the graphene or hydrogenated-graphene encapsulation would turn them into semiconductors. Further calculations of the band structure with graphene or hydrogenated graphene on the sidewall of the metallic armchair (4,4) CNT show a similar trend of gap opening. The band structure E(k) of the pristine (8,0) zigzag CNT is shown in figure 5(a) as a representative of other semiconducting nanotubes. Our calculations show that the (8,0) CNT has an energy gap of 0.65 eV, consistent with the local density approximation (LDA) calculations by Blase et al [33]. The effects of graphene and hydrogenated graphene on the electronic structure of the (8,0) CNT are shown in figures 5(b) and (c), respectively. In figure 5(b), the bands for the configuration in figure 2(c) or (d) have small dispersion and are strikingly different from those of figure 5(a), confirming that the creation of C-C bridges alters the CNT electronic properties significantly. We find that the sp3 hybridization between the graphene and the nanotube induces impurity states at the Fermi level of the (8,0) tube. When graphene is attached to the sidewall of the semiconducting (8,0) tube, its band gap is therefore not preserved. Our further calculations of the band structure of the (9,0) CNT with graphene attached to its sidewall show that this functionalization enlarges the energy gap of the pristine tube from 0.08 to 0.85 eV. Furthermore, the energy gap of the (9,0) CNT is now determined by the HOMO and LUMO energies at the Z point of the Brillouin zone. This confirms that the bands of a semiconducting zigzag CNT with attached graphene are strikingly different from those of the pristine tube. On the other hand, comparison with figure 5(a) shows that the bands of the (8,0) zigzag semiconducting CNT with hydrogenated graphene have a character similar to that of the pristine (8,0) tube. However, the hydrogenated graphene narrows the fundamental band gap, which is reduced to 0.28 eV with respect to the value of 0.65 eV for the pristine CNT. Apart from these minor changes in the energy gap, our further calculation of the band structure of the (9,0) CNT with hydrogenated graphene shows that this functionalization largely preserves the band character of the pristine (9,0) tube. Tsetseris et al [31] found that hydrogenation eliminated all the Si-O-C bonds, leading to floating CNTs in SiO2 with electronic properties very close to those of pristine CNTs in vacuum.
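The gap openings and closings discussed above reduce, in practice, to reading off the highest occupied and lowest unoccupied eigenvalues over the sampled k points. A minimal sketch, assuming the eigenvalues and Fermi level have already been parsed from the DFT output, could look as follows; the toy bands in the example are not taken from the actual calculations.

```python
# Minimal sketch: estimate the fundamental gap from eigenvalues (n_kpoints x n_bands, eV).
import numpy as np

def fundamental_gap(eigenvalues: np.ndarray, e_fermi: float) -> float:
    """Return the VBM-to-CBM gap in eV; 0.0 indicates metallic behaviour."""
    occupied = eigenvalues[eigenvalues <= e_fermi]
    empty = eigenvalues[eigenvalues > e_fermi]
    vbm, cbm = occupied.max(), empty.min()
    return max(cbm - vbm, 0.0)

# Toy example: two bands sampled along the tube axis (k in units of pi/c).
k = np.linspace(0.0, 1.0, 21)
valence = -0.325 - 0.5 * k**2
conduction = 0.325 + 0.5 * k**2
eigenvalues = np.column_stack([valence, conduction])   # crude (8,0)-like stand-in bands
print(f"gap = {fundamental_gap(eigenvalues, e_fermi=0.0):.2f} eV")   # -> 0.65
```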
This may be because SiO2 is an insulator and therefore does not alter the electronic properties of the CNT. Our results show that graphene on the sidewall of a semiconducting zigzag CNT has a great impact on its electronic structure, and that hydrogenating the dangling bonds of the graphene significantly reduces this impact. In summary, using first-principles DFT calculations, the geometries and electronic properties of junctions constructed by inserting a CNT into a graphene matrix have been investigated systematically. We find that the insertion of the CNT into the graphene matrix leads to the formation of C-C covalent bonds between the graphene and the CNT that distort the CNT geometry, whereas hydrogenation of the dangling carbon bonds on the graphene restores the graphene-like structure of the pristine tube. The calculated band structures show that attaching graphene or hydrogenated graphene to an armchair CNT opens up a band gap and converts the metallic CNT into a semiconductor. For a zigzag CNT, the sp3 hybridization between the graphene and the nanotube alters the band structure of the tube significantly, whereas functionalization with hydrogenated graphene on its sidewall essentially preserves the band character of the pristine CNT, apart from minor changes in the energy gap. Our computational design thus supplies a novel hybrid CNT-graphene composite nanostructure. It was recently shown that incorporating CNTs within a metal-organic framework enhances its surface area, stability and hydrogen uptake capacity [34,35]. The composite nanomaterial studied here offers an enhanced surface area in 3D space, which could lead to an increase in the hydrogen storage capacity at room temperature [36]. Our computational design also points to a new direction for achieving novel hybrid materials from the two spotlight materials, namely CNTs and graphene [37]. We note that organic synthesis is increasingly directed toward producing bio-inspired and newly designed molecules [38], such as DNA sequence motifs for structure-specific recognition and separation of CNTs [39]. The synthesis of the hybrid structures studied here is likely to be driven by an interest in molecules that not only have intriguing structures but also serve special functions.
3,703
2009-09-01T00:00:00.000
[ "Physics" ]
A Rest-frame Near-IR Study of Clumps in Galaxies at 1 < z < 2 Using JWST/NIRCam: Connection to Galaxy Bulges A key question in galaxy evolution has been the importance of the apparent “clumpiness” of high-redshift galaxies. Until now, this property has been primarily investigated in the rest-frame UV, limiting our understanding of its relevance. Are the clumps short-lived, or are they associated with longer-lived massive structures that are part of the underlying stellar disks? We use JWST/NIRCam imaging from the Cosmic Evolution Early Release Science (CEERS) survey to explore the connection between the presence of these “clumps” in a galaxy and its overall stellar morphology, in a mass-complete (log M*/M⊙ > 10.0) sample of galaxies at 1.0 < z < 2.0. Exploiting the uninterrupted access to rest-frame optical and near-IR light, we simultaneously map the clumps in galactic disks across our wavelength coverage and measure the distribution of stars between their bulges and disks. First, we find that the clumps are not limited to the rest-frame UV and optical, but are also apparent in the near-IR, with ∼60% spatial overlap. This rest-frame near-IR detection indicates that clumps also feature in the stellar-mass distribution of the galaxy. A secondary consequence is that they are hence expected to increase the dynamical friction within galactic disks, leading to gas inflow. We find a strong negative correlation between how clumpy a galaxy is and the strength of its bulge. This firmly suggests an evolutionary connection, either through clumps driving bulge growth, or the bulge stabilizing the galaxy against clump formation, or a combination of the two. Finally, we find evidence of this correlation differing from the rest-frame optical to the near-IR, which could suggest a combination of varying formation modes for the clumps. Corresponding author: Boris S. Kalita<EMAIL_ADDRESS>(Kavli Astrophysics Fellow). INTRODUCTION Over the last two decades, extremely deep, high resolution data, courtesy mainly of the Hubble Space Telescope, revealed high-redshift star-forming galaxies to be much more clumpy than their low-redshift counterparts (Conselice et al. 2004; Elmegreen & Elmegreen 2005; Elmegreen et al. 2008; Förster Schreiber et al. 2011; Guo et al. 2015, 2018; Shibuya et al. 2016). This has sparked a debate about the consequences of these structures for galaxy evolution. Mainly observed at rest-frame UV and optical wavelengths, the origin and evolution of these 'clumps' within star-forming galaxies are far from well understood. One of the most consequential questions, concerning the morphological evolution of galaxies, is whether these are simply very low-mass features only observed in the rest-frame UV and optical, or massive enough to contribute to the total stellar mass traced by rest-frame near-IR light. Some simulations suggest that they are disrupted on short timescales (≲ 50 Myr), thus having minimal effect on the underlying stellar distribution (Murray et al. 2010; Tamburello et al. 2015; Buck et al. 2017; Oklopčić et al. 2017). However, other simulations suggest that clumps survive for much longer and are expected to be crucial drivers of bulge growth through dynamical friction and gravitational torques (Bournaud et al. 2011, 2014; Elmegreen et al. 2008; Ceverino et al. 2010; Mandelker et al. 2014, 2017). They could also be associated with more massive structures that HST-based rest-frame UV observations miss (Faure et al. 2021).
Before the James Webb Space Telescope (JWST) era, observations were limited to star-formation-tracing rest-frame UV and optical light, mainly using HST (e.g., Guo et al. 2015, 2018; Sattari et al. 2023). We did not have high resolution capabilities in the rest-frame near-IR at z > 1 to map the underlying stellar distribution on the relevant spatial scales (∼ 1 kpc). With the highly sensitive, high resolution rest-frame optical (better than HST) and near-IR capabilities of JWST, we can finally undertake a relatively more complete study. It is now possible to extend the resolved imaging of these clumpy galaxies well into the rest-frame near-IR, thereby accessing the bulk of the stellar light. In this work, we use the capabilities of JWST/NIRCam to make the first resolved maps of clumps within galaxies in both the rest-frame optical and near-IR for galaxies at z > 1. It should be noted that the term 'clumps' has until now been used to refer to marginally resolved or unresolved structures that are expected to be a result of gravitational instabilities. These conclusions are based on in-depth investigations conducted primarily in the rest-frame UV. The structures detected in this work are at rest-frame optical and near-IR wavelengths, and although we expect them to be associated with the previously studied UV-detected clumps (Sec. 4.1), the exact connection will be tackled in a follow-up paper (Kalita et al., in prep.). Nevertheless, we still use the same term 'clumps' in this work as a general reference to structures giving galaxies a clumpy appearance. We are not implying that they necessarily share the same properties as the structures previously studied. To make a parallel assessment of the general stellar morphology of each galaxy, we exploit the longest wavelength NIRCam band (F444W) to obtain a bulge-to-disk flux ratio. This value closely traces the corresponding stellar-mass distribution due to a minimally varying mass-to-light ratio at such long wavelengths (Bell et al. 2003; Zibetti et al. 2009; Schombert et al. 2019). Throughout, we adopt a concordance ΛCDM cosmology, characterized by Ωm = 0.3, ΩΛ = 0.7, and H0 = 70 km s−1 Mpc−1. We use a Chabrier initial mass function. All images are oriented such that north is up and east is left. We require simultaneous coverage of rest-frame optical as well as near-IR light, along with similar physical-to-angular scales (to not introduce systematic biases) across our sample. Moreover, the galaxies should preferentially be at z > 0.5, where most studies in the literature begin finding a significant population of clumpy galaxies (Guo et al. 2015). To satisfy each of these requirements, we settle on a redshift range of 1.0 < z < 2.0. For the sample selection itself, we use the catalogue for the EGS-HST field (Stefanon et al. 2017, S17), created with an extremely large wavelength coverage (0.4 − 8.0 µm). Its stellar-mass completeness limit, including a maximum dust attenuation value of AV = 3.0 mag, is set at log(M*/M⊙) = 9.5 for 1.0 < z < 1.5 and log(M*/M⊙) = 10.0 for 1.5 < z < 2.0. For the sake of convenience, we limit our analysis to galaxies above log(M*/M⊙) = 10.0 over the whole redshift range of our sample. This selection leaves us with a total of 412 galaxies. However, we do provide the results for galaxies with log(M*/M⊙) = 9.5 − 10.0 at 1.0 < z < 1.5 in the Appendix.
Stellar morphology: bulges and disks We begin with an assessment of the stellar light distribution by measuring the bulge-to-disk flux ratio of each galaxy. To ensure that this reflects the underlying stellar-mass distribution, we use the longest wavelength NIRCam filter available to us, F444W. For our redshift range, we expect a corresponding variation of the mass-to-light ratio of ≲ 25 − 30% (Schombert et al. 2019; Zibetti et al. 2009; Bell et al. 2003) with color. This variation would have been larger had we used shorter wavelengths, thereby complicating the interpretation of our results. Therefore, the decision to use the longest wavelengths is aimed at using an 'almost' color-invariant tracer of the stellar distribution. An additional advantage of this approach is the sensitivity to highly obscured cores/bulges that we expect to find at high z (e.g., Kalita et al. 2022), since attenuation is minimal in the rest-frame near-IR. We measure the flux of the bulge and the rest of the galaxy by fitting our sample with a dual-component model: two Sérsic profiles with fixed indices of n = 1 for the disk and n = 4 for the bulge. While we use these parameter settings to obtain the results presented throughout this work, we also redo our measurements with a Sérsic index of 2 (associated with a pseudobulge; Gadotti 2009) in place of 4 for the (classical) bulge and find only negligible differences (well within the uncertainties). Moreover, the use of n = 4 is driven by our choice not to obtain a perfect fit at the individual galaxy level, but rather to achieve a uniform determination of the flux across our sample (this being a widely used approach, e.g., Simard et al. 2011; Meert et al. 2015; Bottrell et al. 2017a,b, 2019). Finally, it also allows us to broadly separate the disk and bulge. For each object, we make cutouts of dimensions 101 × 101 pixels (3″ × 3″) of the background-subtracted F444W images. We also create corresponding cutouts of the available weight maps from CEERS, which are used as the noise maps in the fitting procedures. Finally, the PSF for the fitting is generated by median-stacking 7 unsaturated stars found within the field of view of the observations. The fitting is done using the python-based package GALIGHT (Ding et al. 2022), which in turn implements the forward-modelling galaxy image fitting tool LENSTRONOMY (Birrer & Amara 2018). This approach gives us access to the full posterior distribution of each fitted parameter. The fitting is optimised through a two-stage Particle Swarm Optimizer (PSO; Kennedy & Eberhart 1995), which is finally fed into a Markov Chain Monte Carlo (MCMC) fitting as the initial condition. We are then left with the best-fit parameters as well as their respective 1σ confidence intervals.
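As a rough illustration of the bulge+disk decomposition described above, the sketch below fits two fixed-index Sérsic components with astropy and sums each fitted component to form a bulge-to-disk flux ratio. This is only a simplified stand-in: the actual measurements use GALIGHT/lenstronomy with a PSO+MCMC chain and PSF convolution, and the image, weight map and parameter values used here are synthetic placeholders.

```python
# Simplified two-component Sersic decomposition (stand-in for the GALIGHT fits).
import numpy as np
from astropy.modeling import models, fitting

ny = nx = 101                                    # 3" x 3" cutout, as in the text
y, x = np.mgrid[0:ny, 0:nx]

bulge = models.Sersic2D(amplitude=1.0, r_eff=5.0, n=4.0, x_0=50, y_0=50,
                        ellip=0.1, theta=0.0, fixed={"n": True})
disk = models.Sersic2D(amplitude=0.3, r_eff=15.0, n=1.0, x_0=50, y_0=50,
                       ellip=0.3, theta=0.5, fixed={"n": True})
model = bulge + disk

# `image` and `weight` would come from the F444W cutout and the CEERS weight map;
# here a noisy realization of the model stands in for real data.
rng = np.random.default_rng(1)
image = model(x, y) + rng.normal(0.0, 0.02, size=(ny, nx))
weight = np.full_like(image, 1.0 / 0.02**2)

fitter = fitting.LevMarLSQFitter()
best = fitter(model, x, y, image, weights=np.sqrt(weight))

# Component fluxes approximated by summing each fitted component over the cutout.
f_bulge = best[0](x, y).sum()
f_disk = best[1](x, y).sum()
print(f"bulge-to-disk flux ratio ~ {f_bulge / f_disk:.2f}")
```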
Finally, we obtain three morphological flux ratios from this analysis: the bulge-to-disk (using the fluxes of the two components), the bulge-to-total (where the total flux is given by the sum of the disk and residual flux) and the disk-to-total. We will mainly be using the bulge-to-disk ratio throughout this work, but will refer to the other two in order to further our understanding of the results (Sec. 4.2). Quantifying the clumpiness The aim of the second segment of our analysis is to quantify the flux from clumps that one can observe by visual inspection in the highly sensitive JWST/NIRCam images of the sample galaxies. As we shall be making comparisons across the wide wavelength coverage of the data (from F115W to F444W), we begin by matching the PSF sizes to that of the longest wavelength and therefore lowest resolution filter (F444W). We obtain the PSFs of each filter as discussed in Sec. 3.1. A Gaussian-model fit gives us the PSF parameters, which are used to calculate the effective σ of the Gaussian kernel that each image needs to be convolved with to match the resolution of the F444W data. We build an automated clump detection algorithm (Fig. 1) based on the method widely used for detecting clumps in galaxies in HST UV and optical bands (Conselice et al. 2003; Guo et al. 2015; Calabrò et al. 2019). We first smooth the measurement image (from F115W to F444W, but with a PSF equivalent to that of F444W) using a Gaussian filter of σ = 4 pixels. This smoothed image is subtracted from the original image, leaving behind a contrast map showing structures varying at scales similar to the difference between the F444W PSF and its convolution with the Gaussian used for smoothing. This difference is found to be ∼ 0.07″. It should be noted that this is not the size of the clumps. We then make cutouts from the measurement images corresponding to each source, of dimensions 3″ × 3″, as done in the previous section. We then run a source detection using the python package PHOTUTILS (Bradley et al. 2022), also included in GALIGHT, with a threshold at 5σ/pixel. (There are two reasons we use a 5σ detection threshold with PHOTUTILS: lower thresholds select large sections of the galaxies rather than the clumps, and the robustness check carried out for each detection leads to the rejection of more than ≳ 60% of the sources at 3σ and ≳ 45% at 4σ, whereas at 5σ this rejection rate is around 20-30%; above this threshold our sample starts shrinking considerably, hence we settle on the 5σ threshold.) This method selects regions with peaked emission associated with clumps within the galaxies (Fig. 1). [Table 1 columns: total galaxies; bulge+disk measurements; clumpiness detected (> 68% confidence) in F115W, F150W, F200W, F277W and F356W.] We only consider clumps within the extent of each galaxy, which is determined by a source detection on the stellar-mass-sensitive F444W image with a threshold of 2σ (the red color map region in Fig. 1). Moreover, it is critical not to include the central bulge as a clump, since both are interpreted as small-scale structures by our algorithm. Hence, we mask the central region, as shown in the top panel of Fig. 1.
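A compact sketch of the detection steps just described (PSF matching by Gaussian convolution, contrast-map construction, 5σ source detection and core masking) is given below. The array names, the core-mask radius and the noise handling are assumptions for illustration; the published measurements additionally restrict the galaxy footprint with the 2σ F444W segmentation map and apply the robustness test described next.

```python
# Illustrative clump-detection step (not the authors' pipeline): PSF matching,
# contrast map, 5 sigma/pixel detection, core masking and the clumpiness ratio.
import numpy as np
from scipy.ndimage import gaussian_filter
from photutils.segmentation import detect_sources

def clumpiness(image, sigma_img, sigma_f444w, noise_rms, core_radius=5,
               smooth_sigma=4.0, nsigma=5.0, npixels=5):
    """image: band cutout; sigma_img/sigma_f444w: Gaussian PSF widths in pixels."""
    # Convolve to the F444W resolution (kernel width from quadrature difference).
    kernel_sigma = np.sqrt(max(sigma_f444w**2 - sigma_img**2, 0.0))
    matched = gaussian_filter(image, kernel_sigma)

    # Contrast map: matched image minus a sigma = 4 pixel smoothed version of itself.
    contrast = matched - gaussian_filter(matched, smooth_sigma)

    # Mask the central core so that the bulge is not counted as a clump
    # (core_radius is an illustrative assumption; the paper uses the contrast map).
    ny, nx = image.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    core = (xx - nx // 2)**2 + (yy - ny // 2)**2 < core_radius**2

    # 5 sigma/pixel source detection on the core-masked contrast map.
    seg = detect_sources(np.where(core, 0.0, contrast), nsigma * noise_rms, npixels)
    clump_mask = (seg.data > 0) if seg is not None else np.zeros_like(core, bool)

    # Clumpiness = clump flux / galaxy flux without the core (the galaxy footprint
    # would normally come from the 2 sigma F444W segmentation map; omitted here).
    return matched[clump_mask].sum() / matched[~core].sum()
```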
To ensure that our detection is robust, we carry out an additional test. For every single clump (associated with a continuous segmentation-map source detected at 5σ), we artificially replicate it at another random position (by replacing the flux originally there) within the galaxy image, while ensuring that it is not positioned within the core or outside the galaxy edge. Then we re-run our clump detection algorithm and check whether this new (fake) clump is detected. We repeat this process 1000 times, and only allow a clump to be included in our final measurement for the galaxy if it is detected > 68% of the time. This method removes ∼ 20% of the original detections. The rejected cases are typically clumps that are only marginally detected and do not produce a strong enough fluctuation within the underlying disks to be detected repeatedly. Finally, once we have the robustly detected clump map, we simply measure the net flux within the clumps and divide it by the flux of the whole galaxy in the same filter (the region for which uses the previously used 2σ threshold map from the F444W image, shown with a red color map in Fig. 1), after masking the core. (The central mask is determined by the extent of the central clump, using the contrast map, as seen in the segmentation map for F444W. The central position is determined by finding the point of minimum asymmetry in the F444W band, where the central bulge is the most dominant object. This is always found to be coincident with the bulge location in the bulge-disk decomposition fit.) This gives us the fractional flux contained in clumps with respect to the rest of the galaxy without the core. (We remove the core from the net flux measurement to remove any possible correlation between the clump flux fraction and the bulge measurements in Sec. 3.1. This differs from how 'clumpiness' is measured in most previous works, which do not remove it. We checked, however, whether our results change if we include the core, and we find that they do not.) We define this ratio as the clumpiness: Clumpiness = Σ(clump flux) / (galaxy flux without core). The uncertainty in the clumpiness is determined from the error in the estimation of the flux of each detection during the artificial replication process. In cases where no clumps are detected, we add a point source within the galaxy with varying levels of fractional flux (−4.0 to 0.0 in log scale) compared to the net flux of the galaxy. For each fractional flux value, we repeat the process 1000 times. This step allows us to estimate an upper limit based on the flux level below which we begin having no robust detections (< 68%). This process is repeated for each filter separately. Some of the results for F115W and F356W, approximately representing the rest-frame optical and near-IR, are shown in Fig. 3. Clumps in optical and near-IR We focus our study on galaxies that have detectable bulges and disks (i.e. those having robust bulge and disk measurements with < 50% uncertainty in both components). This choice removes any galaxy that is composed exclusively of a disk or of a spheroidal bulge. We also ensure a disk axis ratio > 0.3. The latter condition is applied to prevent any biases due to high dust column density in a galaxy observed perfectly edge-on. Our selected sample constitutes 73% and 67% of the S17 galaxies (10.0 < log(M*/M⊙) < 11.0) at redshift ranges 1.0 − 1.5 and 1.5 − 2.0 respectively (also evident from Table 1 and f_tot in Fig. 4).
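Returning briefly to the clump-robustness test described at the start of this subsection, a minimal Monte Carlo sketch could look as follows. The detection step is left as a placeholder function, the masks and the 68% acceptance criterion follow the text, and everything else is an illustrative assumption rather than the actual implementation.

```python
# Illustrative robustness test: paste a detected clump at random allowed positions
# and count how often it is recovered; `detect_clump_at` is a placeholder function.
import numpy as np

def recovery_fraction(image, clump_mask, galaxy_mask, core_mask,
                      detect_clump_at, n_trials=1000, rng=None):
    rng = rng or np.random.default_rng()
    ys, xs = np.nonzero(clump_mask)
    dy, dx = ys - ys.mean(), xs - xs.mean()          # clump footprint, centred
    clump_flux = image[clump_mask]
    allowed = np.argwhere(galaxy_mask & ~core_mask & ~clump_mask)

    hits = 0
    for _ in range(n_trials):
        yc, xc = allowed[rng.integers(len(allowed))]
        yy = np.clip(np.round(yc + dy).astype(int), 0, image.shape[0] - 1)
        xx = np.clip(np.round(xc + dx).astype(int), 0, image.shape[1] - 1)
        fake = image.copy()
        fake[yy, xx] = clump_flux                    # replace the flux at the new spot
        hits += bool(detect_clump_at(fake, yc, xc))  # was the fake clump recovered?
    return hits / n_trials

# A clump would be kept only if recovery_fraction(...) > 0.68, as in the text.
```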
This is in line with previous expectations of disk-galaxy fractions in massive galaxy samples up to z = 2 (van der Wel et al. 2014). At z > 1, studies have usually concentrated on measuring clumpiness in UV bands (below the 4000 Å break; Guo et al. 2015). We however concentrate on the rest-frame optical and near-IR bands to study the underlying stellar distribution, as all the JWST/NIRCam bands we use in our study are above the 4000 Å break (barring F115W for 1.5 < z < 2.0, the results for which we discuss separately later). We first present the results at a rest-frame wavelength of ∼ 1 µm as representative of the whole wavelength range covered in our study. We find that, out of all galaxies in our study, 40% and 41% of them at redshift ranges 1.0 − 1.5 (using F200W) and 1.5 − 2.0 (F277W), respectively, show at least some level of clumpiness within our detection limits. This is represented as a fraction f_clumpy in Fig. 4. Our fractions are of course dependent on the detection threshold we set, and using a lower value might result in a higher fraction. Nevertheless, as mentioned earlier, doing so gives us less reliable clumpiness measurements. Moreover, this somewhat strict detection still gives us results in agreement with the expected percentage of galaxies featuring UV-clumps (Guo et al. 2015) at the respective mass and redshift ranges. Hence our conditions do not hinder the scientific relevance of our sample within the current literature framework. As can be deduced from Table 1, similar fractions of clumpy galaxies are obtained across all filters spanning the rest-frame optical and near-IR. We do note, however, that there is a drastic fall in the numbers for the F444W filter (not shown in Table 1). This is most likely because it is a factor of ∼ 2 shallower than the others, in addition to the clumps being fainter at long wavelengths. We therefore drop it from our clumpiness estimations and limit our analysis up to F356W, still maintaining coverage of exclusively rest-frame near-IR light in at least one filter across our redshift range. The similarity of the fractions of galaxies showing clumpiness across bands does not, however, tell us whether the same structures are being detected in each galaxy. Thus, we apply a method to quantify the overlap between the clumpiness maps in rest-frame optical and near-IR bands. We do so by finding the net area of all pixels that are detected in both the shortest (F115W) and longest (F356W) wavelength band used in our detection algorithm. This value is then normalised by the area of detected clumps in either of the two bands to give a fractional overlap for each. We find values of 0.6 ± 0.2 and 0.7 ± 0.2 for the rest-frame optical (F115W) and near-IR (F356W). This result strongly suggests that we are not looking at mutually exclusive clumps in separate bands. Our results indicate that a large fraction of the galaxies showing clumpiness in the optical and near-IR bands would also show UV-clumps, given the similarity of the fraction of galaxies showing clumpiness in our study and that in Guo et al. (2015). However, we do not attempt to make a comparison to ancillary HST/ACS CANDELS data, since those images are a factor of ∼ 2 shallower than the JWST/NIRCam images we are using (F115W to F356W). Nevertheless, we observe that UV-clumps do appear in the F115W filter at 1.5 < z < 2.0, which traces UV light at this redshift range, for all galaxies showing optical and near-IR clumps.
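The band-to-band spatial overlap quoted above is a simple pixel-intersection statistic. A minimal sketch, assuming boolean clump maps produced by the detection step on PSF-matched images, is given below.

```python
# Overlap between clump maps in two bands, normalised by the clump area in each band.
import numpy as np

def overlap_fractions(clumps_f115w: np.ndarray, clumps_f356w: np.ndarray):
    both = np.logical_and(clumps_f115w, clumps_f356w).sum()
    return both / clumps_f115w.sum(), both / clumps_f356w.sum()

# Toy example (4-pixel clump footprints with a 2-pixel intersection):
a = np.zeros((5, 5), bool); a[1:3, 1:3] = True      # stand-in F115W clump map
b = np.zeros((5, 5), bool); b[2:4, 1:3] = True      # stand-in F356W clump map
print(overlap_fractions(a, b))                       # -> (0.5, 0.5)
```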
Finally, a Kolmogorov-Smirnov test on the complete sample of galaxies and on those with non-zero clumpiness gives only a 4% and 7% cumulative probability (p-value) of them being drawn from the same distribution. However, if we remove the clumpy galaxies from the full sample and re-measure the probability, we obtain p-values well below 1%. We therefore interpret this result as an indication that clumpy galaxies are largely distinct from galaxies with no clumps in regard to their bulge-to-disk ratios. Bulge-to-disk ratio and clumpiness Figure 5. Bulge-to-disk ratio vs clumpiness at a rest-frame wavelength of ∼ 1 µm for the complete sample. For the redshift window 1.0 − 1.5 we use F200W, while for 1.5 − 2.0 we use F277W. The Pearson correlation coefficient (r) for the plotted data is provided in the bottom-right corner. The grey points mark the upper-limit values for all sources without detected clumpiness but with robust bulge-to-disk ratios; these are not included in the estimation of r. This work exploits the ability to simultaneously map the clumps in the rest-frame optical and near-IR, along with the near-IR bulge-to-disk flux ratio (which closely traces the stellar-mass ratio between these two components, as discussed in Sec. 3.1). As done in Sec. 4.1, we only show the relation between these two quantities at ∼ 1 µm rest-frame as a representation of our sample (Figs. 5 and 6). For all other filters, which are found to corroborate our conclusions, please refer to the Appendix. Firstly, all galaxies with non-zero clumpiness values appear within our sample with robust bulge and disk measurements. Fig. 5 reveals clumpiness to decrease with increasing bulge-to-disk flux ratio. This negative correlation, as determined by the Pearson correlation coefficient, is found to be moderate to strong, with a value of −0.50 ± 0.05 for the whole sample. (The uncertainty on the coefficient is measured through bootstrapping: we randomize the pairs of clumpiness and bulge-to-disk values, re-measure the coefficient, repeat this 1000 times, and use the standard deviation of the resulting distribution as the uncertainty.) This strength of correlation is also found to mostly hold if the sample is divided into two bins in redshift as well as stellar mass (Fig. 6, along with Figs. 10 and 11 in the appendix), with a few exceptions likely due to low number statistics. We also note that the positions of the clumpiness upper limits (in both Figs. 5 and 6) follow the same trend, with almost all of them appearing at the high bulge-to-disk end of the distribution. Finally, there does not seem to be any obvious evolution of this negative correlation with redshift in our sample, within the current levels of uncertainty. We inspect whether this trend is induced by the measurement method rather than being an intrinsic correlation. As discussed in Sec. 3.1, we already ensure that the bulge flux is not incorporated in any way in the measurement of the clumpiness. We find that even when using the net flux of the galaxy to normalise the clump flux, we still end up with a similar correlation. We also ensure that the disk axis ratio (above the cut-off of 0.3) is not correlated with either of the two plotted parameters. Making the axis-ratio cut-off stricter (> 0.6), which removes 50% of our sample, actually strengthens the negative correlation (the coefficients become more negative by ∼ 40%; Fig. 5).
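The correlation coefficient and its bootstrap uncertainty described above can be reproduced with a few lines; the sketch below interprets "randomizing the pairs" as resampling them with replacement and uses illustrative array names.

```python
# Pearson coefficient with a bootstrap uncertainty (1000 pair resamplings, as in the text).
import numpy as np
from scipy.stats import pearsonr

def pearson_with_bootstrap(clumpiness, bulge_to_disk, n_boot=1000, seed=0):
    clumpiness = np.asarray(clumpiness)
    bulge_to_disk = np.asarray(bulge_to_disk)
    r, _ = pearsonr(clumpiness, bulge_to_disk)

    rng = np.random.default_rng(seed)
    boot = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(clumpiness), len(clumpiness))   # resample the pairs
        rb, _ = pearsonr(clumpiness[idx], bulge_to_disk[idx])
        boot.append(rb)
    return r, np.std(boot)

# e.g. r, r_err = pearson_with_bootstrap(clumpiness_arr, b2d_arr)   # values like (-0.50, 0.05)
```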
We further check whether this correlation is instead driven by a dependence on the total stellar mass of the galaxy. While the bulge-to-disk ratio is found to be associated with the stellar mass, the correlation is much weaker (r ∼ 0.14) than that with the clumpiness. We therefore regard this as possibly a second-order effect. We also do not find any detectable correlation between clumpiness and stellar mass (r ∼ 0.05). Finally, we investigate a dependence on the integrated star formation over the whole galaxy using the star-formation rate (SFR) measurements from spectral fitting in S17. We compare them to the expected SFR for each galaxy had it been exactly on the star-forming main sequence (Leslie et al. 2020). We re-create Fig. 5 with only the galaxies having measured SFRs in S17, using a constant star-formation history model, and with values within or above a 0.3 dex scatter of the main sequence. This filtering ensures that only confirmed star-forming galaxies are included. The results show no change in the observed correlation (r = −0.52 ± 0.08), and no dependence is observed on the distance of the galaxy from the main sequence (the ratio of the observed SFR to that estimated based on the main sequence). The corresponding figure is shown in the Appendix (Fig. 12). Although we primarily discuss the bulge-to-disk ratio here, replacing it with the bulge-to-total ratio maintains the negative correlation, although it saturates at ∼ 1 as expected. When we check the same for the disk-to-total ratio, we observe a very weak positive correlation (r ∼ 0.10). Both these results suggest that the strong relation observed in Fig. 5 is mainly driven by the relation between bulge dominance and clumpiness, with weak contributions from a parallel decline of the disk strength. Bulge-to-disk and clumpiness optical/near-IR ratios We further investigate the ratio of the clumpiness detected in the rest-frame optical (F115W) and near-IR (F356W) as a tracer of an equivalent integrated color of the clumps observed in a galaxy. This value will be associated with, but not identical to, the color of individual clumps. We limit this to the 1.0 < z < 1.5 range to maximise the wavelength difference between the filters, while ensuring that F115W still traces the optical light and F356W probes the stellar-mass-tracing near-IR flux. The latter would not be true at z > 1.5. We observe that the clumpiness ratio shows a moderate correlation with the bulge-to-disk ratio (r = 0.40 ± 0.09, Fig. 7). In other words, for a specific clumpiness in the near-IR light sensitive to stellar mass, galaxies showing higher clumpiness at optical wavelengths show higher bulge-to-disk ratios, and vice versa. However, it should also be noted that the majority of our sample has a clumpiness ratio > 1, suggesting that most galaxies have more prominent clumps in shorter bands tracing younger stars and star formation. We provide an interpretation of this correlation in Sec. 5.2.
Clumpy galaxy fractions Our work showcases that clumpiness in high-redshift (z > 1) galaxies is present across the rest-frame UV to near-IR. We also find that there is a 60 ± 20% spatial overlap between the clump maps at these wavelengths. Especially given their presence in the stellar-mass-tracing near-IR, we can conclude that these features do play a role in the morphological evolution of galaxies. It is therefore unlikely that we are only dealing with the young, short-lived star-forming objects termed UV-clumps, demonstrating the need for a more generalised classification of clumps. This classification also needs to accommodate the possibility of the clumps being associated with early stages of structures like spiral arms, as suggested by the intensity maps in Figs. 1 and 3 (corroborated by the detection of 'clumpy spirals' at z > 1, e.g., Elmegreen et al. 2009; Margalef-Bentabol et al. 2022). We first discuss possible formation mechanisms for these structures in light of the fraction of galaxies featuring non-zero clumpiness. Our ≳ 40% fractions are much higher than the fraction of galaxies expected to be undergoing major mergers. The gas-rich major-merger fraction is a factor of 2 lower based on expectations from López-Sanjuan et al. (2013). Similarly, the same fraction from Lotz et al. (2011) is found to be lower (by a factor > 2 up to z = 1.5) for merger observability timescales of ≤ 2 Gyr, and only comparable if the effects of mergers are observable over a ∼ 3 Gyr interval. We also compare to major-merger fractions from the TNG50 and TNG100 simulations (Nelson et al. 2019) and find them to be a factor > 1.5 lower than our clumpy galaxy fraction, across the stellar-mass range and up to a merger detectability timescale of 1.5 Gyr. However, the fractions based on minor mergers (Lotz et al. 2011) and violent disk instabilities (Cacciato et al. 2012) do agree with our estimate of the fraction of galaxies showing clumpiness (for expectations from each scenario, see Guo et al. 2015). This suggests that a large fraction of the clumpy galaxies are likely experiencing violent disk instabilities and/or minor mergers. Physical interpretation based on simulations The negative correlation observed between the clumpiness and the bulge-to-disk flux ratio (Fig. 5) may indicate an underlying physical connection. This trend is especially important since we expect this ratio to closely trace the associated mass ratio, due to a minimally varying mass-to-light ratio (Sec. 3.1). Furthermore, given that the bulges of galaxies are known to be redder than the disks, either due to high dust obscuration or older stellar populations, the negative correlation would actually get stronger if we considered larger-than-expected mass-to-light ratio variations. We now compare this to theoretical expectations to suggest a physical interpretation of this negative correlation. Multiple simulation studies have claimed that clumps in disk galaxies enable the funneling of gas to the centre to form the galactic bulge (see Bournaud 2016, for a review). Our results feature the first observational sample indicating how such a link could manifest.
The observed trend in Fig. 6 is consistent with an evolutionary trajectory beginning with the appearance of clumps, which leads to the dynamical friction and torques that drive gas (possibly also some of the clumps themselves) to the centre. As mentioned in Sec. 4.2, there is only a mild contribution to the negative correlation from a decline of the galaxy disks in our sample. Hence we expect the disk to largely survive this possible scenario. The clumpiness decreasing with increasing bulge-to-disk ratio would then reflect the combined effects of clump migration as well as destruction by stellar feedback (Elmegreen et al. 2008; Ceverino et al. 2010; Bournaud et al. 2011, 2014; Mandelker et al. 2014, 2017). On the other hand, our results also agree with the scenario of bulges leading to the stabilization of the gas in disks (Martig et al. 2009; Agertz et al. 2009; Ceverino et al. 2010; Hopkins et al. 2023). This process would in turn prevent the formation of clumps, resulting in lower clumpiness in galaxies with high bulge-to-disk ratios. Assuming that galaxies without a dominant bulge would be clumpy, this scenario would independently explain the negative correlation in Fig. 5. Nevertheless, the dynamical effect of the clumps cannot be disregarded. Hence, even if we reject the possibility of individual clumps migrating to the core, one could still expect the first scenario to contribute simply through driving gas inwards. This is especially true given that we detect these clumps in the stellar-mass-tracing near-IR, which will inevitably add to the dynamical friction experienced by the gas within the disks (e.g., Bournaud et al. 2014). Finally, Fig. 7 suggests that the underlying negative slope of the correlation in Fig. 6 steepens as we go from the rest-frame near-IR to the optical. It should be noted that this is observed only if one combines the whole sample at 1.0 < z < 1.5 with galaxies showing clumps in both the F115W and F356W filters. It is not as evident separately in the two mass bins shown in Figs. 10 and 11 in the appendix. This correlation can be interpreted in terms of the ratio of the color of the clumps to the color of the underlying disks, since the latter enters the denominator in the calculation of the clumpiness. Therefore, Fig. 7 suggests that galaxies with lower bulge-to-disk ratios have clumps and parent disks with similar colors, whereas those with more dominant bulges have clumps that are bluer than their disks. This could be due to contributions from two separate populations resulting from in-situ as well as ex-situ (accreted clumps/minor mergers) formation (Mandelker et al. 2014, 2017; Zanella et al. 2019). It is expected that in-situ clumps are younger than their parent galaxies, whereas clumps that fall into the galaxy may introduce older stellar populations. However, this interpretation ignores contributions from dust attenuation resulting in reddening. A definitive conclusion requires studying the spectral energy distributions of individual clumps and placing them within the framework of this study.
CONCLUSIONS Our investigation aims to simultaneously map clumps in a mass-complete sample of galaxies at z = 1 − 2 along with their underlying stellar morphology. It has been made possible by the high-resolution rest-frame optical and near-IR capabilities of JWST/NIRCam. We find clumps not to be limited to optical wavelengths sensitive to young stellar populations (as suggested previously by the detection of UV-bright clumps), but to appear almost equally in near-IR light, suggesting an imprint on the stellar distribution. We find a statistically significant correlation between the bulge-to-disk ratio, determined in the rest-frame near-IR, and the clumpiness of individual galaxies. The correlation between the two is moderate to strong and negative: as the clumpiness decreases, the bulge becomes more prominent. This result strongly suggests that the central bulge is evolutionarily linked to clumps. Finally, we find that this correlation is steeper in the optical than in the near-IR, suggesting multiple formation mechanisms for the observed clumps. We would like to express our gratitude to the anonymous referee who made valuable comments that considerably improved the quality of this work. LCH was supported by the National Science Foundation of China (11721303, 11991052, 12011540375, 12233001), the National Key R&D Program of China (2022YFF0503401), and the China Manned Space Project (CMS-CSST-2021-A04, CMS-CSST-2021-A06). XD is supported by JSPS KAKENHI Grant Number JP22K14071. BSK would like to thank Benjamin Magnelli and Carlos Gómez-Guijarro for their brilliant insights into this work and valuable suggestions. Figure 1. The clump detection: (Top-left) An F115W image of a galaxy within our sample (with intermediate levels of clumpiness), with a PSF equivalent to that of F444W. The 2σ segmentation map for the galaxy in F444W is shown with a red color map to distinguish it from other sources. (Top-middle) This image is smoothed using a Gaussian filter of σ = 4 pixels and then subtracted from the original image to get the contrast image. The core is masked during this detection algorithm and is shown with an overlaid blackened mesh. The remaining 'red region' gives the net galaxy flux without the bulge. (Top-right) The source detection algorithm discussed in Sec. 3.2 results in the collection of confirmed clumps, highlighted in blue. The net flux in these regions is divided by the disk flux to get the clumpiness. (Middle and bottom) The lower panels show F115W images and their corresponding contrast images (with highlighted regions of clumps) of other objects within our sample. The S17 IDs for each object are also provided. Figure 2. Contrast images similar to those in Fig. 1, but of galaxies with no detected clumps. Figure 3. Examples of galaxies with detected clumps. For each, the rest-frame optical (F115W) and rest-frame near-IR (F356W) images are shown. The clumps detected using the method shown in Fig. 1 are shown with a blue color scheme. It should be noted that the clumps may not always be clearly detected by eye in these original images, for which one would rather require the respective contrast maps. Figure 4.
(Top) Histogram showing the bulge-to-disk ratio distribution of all galaxies that have both significant bulge and disk flux measurements (in grey, which makes up a fraction f_tot of all galaxies in S17 within our mass and redshift brackets). The black histogram is made up of the subset of sources that have clumpiness detected at a rest-frame wavelength of ∼ 1 µm (F200W and F277W for the two redshift windows), which makes up a fraction f_clumpy of all the S17 galaxies. The probability of the f_tot and f_clumpy samples being drawn from the same distribution (the p-value) is also shown. (Middle and bottom) The same histogram but limited to the stellar-mass ranges log(M*/M⊙) = 10.0 − 10.5 and log(M*/M⊙) = 10.5 − 11.0, respectively. The f_clumpy/bin is the f_clumpy counted for the individual stellar-mass windows. The two columns correspond to the redshift ranges 1.0 − 1.5 and 1.5 − 2.0. Figure 6. Bulge-to-disk ratio vs clumpiness at a rest-frame wavelength of ∼ 1 µm from Fig. 5, split into two redshift windows, 1.0 − 1.5 and 1.5 − 2.0. The three rows from top to bottom are the same as in Fig. 4, corresponding to the full mass range (above the mass-completeness limit of log(M*/M⊙) = 10.0), log(M*/M⊙) = 10.0 − 10.5 and log(M*/M⊙) = 10.5 − 11.0, respectively. The thick grey line represents the average of the data points, and the Pearson correlation coefficient (r) for the plotted data is provided in the bottom-right corner of each panel. Finally, the grey points mark the upper-limit values for all sources without detected clumpiness but with robust bulge-to-disk ratios. Figure 12. Bulge-to-disk ratio vs clumpiness at a rest-frame wavelength of ∼ 1 µm from Fig. 5. Only the galaxies with measured SFRs in S17 and with values within or above a 0.3 dex scatter of the star-forming main sequence (Leslie et al. 2020), for the respective stellar mass and redshift of the galaxies, are shown. The colorbar represents the distance from the main sequence, defined as the log of the ratio between the measured SFR and that expected for the galaxy had it been exactly on the main sequence.
8,499.4
2023-09-11T00:00:00.000
[ "Physics" ]
Biochemical Changes in Cardiopulmonary Bypass in Cardiac Surgery: New Insights Patients undergoing coronary revascularization with extracorporeal circulation or cardiopulmonary bypass (CPB) may develop several biochemical changes in the microcirculation that lead to a systemic inflammatory response. Surgical incision, post-CPB reperfusion injury and blood contact with non-endothelial membranes can activate inflammatory signaling pathways that lead to the production and activation of inflammatory cells, with cytokine production and oxidative stress. This inflammatory storm can cause damage to vital organs, especially the heart, and thus lead to complications in the postoperative period. In addition to the organic pathophysiology during and after the period of exposure to extracorporeal circulation, this review addresses new perspectives for intraoperative treatment and management that may reduce this inflammatory storm and thereby improve the prognosis and possibly reduce the mortality of these patients. Introduction Cardiopulmonary bypass was used for the first time in cardiac surgery over 60 years ago, and since then, many advances have occurred in the conduct of CPB and cardiac anesthesia [1]. CPB comprises a set of devices and techniques that replace cardiac and pulmonary function during surgery. Several cardiopulmonary bypass methods are used for mechanical support of respiratory and cardiocirculatory failure, so that there is adequate blood flow to supply oxygen to the vital organs [2]. CPB makes it easier for procedures that need to enter the intracardiac space, such as valve replacement, to be performed with a reduced amount of blood and under controlled conditions [3]. Substantial evidence from several studies on CPB indicates that this procedure stimulates the inflammatory process and generates reactive oxygen and nitrogen species that overwhelm endogenous antioxidants, resulting in increased oxidative stress that significantly affects the rates of mortality and postoperative morbidity [1,3-8]. Several factors appear to cause the systemic inflammatory reaction, such as blood contact with the CPB device's surface, surgical trauma, endotoxemia, blood loss and ischemia-reperfusion injury [9]. Thus, there is activation of the complement system and the immune system, leukocytes and endothelial cells, which, in turn, are responsible for the release of multiple pro-inflammatory cytokines [10]. Some deleterious effects that influence the high mortality rate are related to the inflammatory response syndrome, which, to a large extent, is related to the interface of blood components, air and the artificial surfaces of the device [2,11]. Despite significant advances in recent years, oxidative stress and inflammation remain major concerns when using CPB [12].
Cardiac surgery with cardiopulmonary bypass is associated with a systemic inflammatory response, a clinical condition characterized in some cases by severe hypotension due to low systemic vascular resistance during and after cardiopulmonary bypass; some of these cases do not respond to volume expansion or catecholamines. This condition is known as vasoplegic syndrome [11,13]. The pathophysiology is complex and includes, in addition to the intense inflammatory response, dysregulation of the vasodilator and vasoconstrictor properties of vascular smooth muscle cells [11]. Although norepinephrine is confirmed as a first-line therapy for the treatment of vasoplegia, many recent randomized studies have identified new adjuvant therapies to control metabolic and oxidative stress as a pharmacological strategy to reduce the incidence of vasoplegic syndrome [11]. In this narrative review, we present our study, focusing on the inflammatory response, vasoplegic syndrome and new insights regarding adjuvant, non-catecholaminergic treatment of the inflammatory response generated during cardiopulmonary bypass. Search Strategy We used PubMed, Web of Science and Embase to search for studies pertaining to cardiopulmonary bypass. The search was implemented using the following keywords: "cardiopulmonary bypass", "oxidative stress", "inflammation", "coagulation", "ischemia/reperfusion", "vasoplegic syndrome", "antioxidants", "free radical", "cardiac surgery", "lung/kidney injury". In that context, we conducted a literature search in the PubMed search engine (https://pubmed.ncbi.nlm.nih.gov, accessed on 27 June 2023). This narrative review is the result of the retrieved works being thoroughly scrutinized by specialists in the field in order to critically include or exclude them. In this review, we included all studies that involved cardiopulmonary bypass. We did not restrict the types of articles, and we included peer-reviewed studies, book chapters, reviews, letters to editors and animal studies. Only studies published in English were included; we excluded studies that did not focus on cardiopulmonary bypass and non-English studies. Pathophysiology of a Cardiopulmonary Bypass During CPB, the pumping function of the heart is performed by a mechanical pump and the function of the lungs is replaced by a device capable of performing gas exchange with the blood. In this context, to understand the systemic complications related to this procedure, it is necessary to understand the blood circuit during cardiopulmonary bypass [14]. In CPB, venous blood is diverted from the heart and lungs as it arrives at the right atrium of the patient, through cannulas inserted in the superior and inferior vena cava. Through a single channel, the venous blood is taken to the oxygenator, a device whose semipermeable membranes separate the blood from the gas phase while allowing gas exchange [15,16]. From the oxygenator, blood is directed to a part of the patient's arterial system, usually the ascending aorta, from which it travels through the arterial system and is distributed to all organs, supplying oxygen to the tissues for carrying out vital processes and removing the carbon dioxide produced by them. After circulating through the tissue capillaries, the blood returns to the superior and inferior vena cava system, where it is continuously redirected to the CPB machine until the end of the surgery [16].
CPB control is carried out by means of a machine, which aspirates and propels the blood. The machine consists of a control panel, oxygenator, reservoir, arterial pump that replaces the contractile function of the heart, cardioplegia (a system for mixing blood and cardioplegic solution) and tubes or cannulas (arterial and venous) [15].

After opening the patient's chest, the surgeon introduces the cannulas into the right atrium and the inferior and superior vena cava. Thus, oxygen-poor blood is diverted and directed to a reservoir (Figure 1). In the machine, the heat exchanger rewarms or cools the blood as needed. Then, an oxygenator removes carbon dioxide from the blood and adds oxygen. After that, the blood passes through a filter that removes air bubbles and other emboli before returning to the body through a pump that directs the blood to the aorta [17]. The role of arterial cannulation, performed by inserting a cannula into an artery, is to return blood to the patient's circulation. Before returning to the body, the blood is filtered to ensure that no particles, debris, or gaseous emboli enter the circulation [15,16].

Figure 1. Activation of the coagulation pathway and oxidative stress in cardiopulmonary bypass (CPB). The contact surface is responsible for producing activated factor XII (FXIIa), which induces the intrinsic coagulation pathway, leading to thrombin formation. Factor XIIa converts high-molecular-weight kininogen (HMWK) into bradykinin. Bradykinin stimulates the release of nitric oxide and inflammatory cytokines. Cytokines stimulate the extrinsic pathway of coagulation, potentiate thrombin and clot formation, and have direct effects on leukocytes. CPB initiates multiple processes that stimulate the production of reactive oxygen species (ROS). The main forms of cardiac ROS are superoxide (O2−) and hydrogen peroxide (H2O2).

The function of the arterial pump is to replace the function of the heart. It sends the blood from the reservoir and ensures an artificial blood circulation. The way in which blood flow should be provided, continuous or pulsatile, has been the subject of debate, which continues to this day [15].
Systemic Response to CPB

In cardiac physiology, the cardiovascular system is a complex set of vessels which, driven by the heart, makes the blood circulate throughout the body. The veins are responsible for flow in the centripetal direction, whereas the arteries are responsible for flow in the centrifugal direction. Within this system, the microcirculation (venules, arterioles and capillaries) is the place where gas exchange occurs and where the regulatory mechanisms of peripheral blood flow are located [18].

During CPB, the physiology of the circulation is completely modified by the introduction of a non-pulsatile flow on the arterial side, which opposes an elevated venous pressure on the venous side of the circulation. This situation generates adaptation mechanisms, thus providing a "shunting" effect, sometimes harmful to the circulation, which may result in the development of systemic inflammatory response syndrome (SIRS) [18].

It is believed that factors associated with CPB, such as hemodilution, contact activation and induction of the systemic inflammatory response, impair microcirculatory perfusion by affecting both transport and diffusion of oxygen at the microvascular level [19].

Another form of damage to the microcirculation related to the use of CPB is the formation of microbubbles, which circulate in the bloodstream and lodge in the capillaries, causing obstruction and promoting ischemia, inflammation, complement activation, platelet aggregation and clot formation [20]. In addition, CPB is responsible for other changes in circulation, such as the replacement of pulsatile physiological flow by continuous flow, which increases the pressure on the venous side. In the microcirculation, the continuous flow induces phenotypic cell adaptation that may also result in the development of SIRS [18].

Hypothermia associated with cardiopulmonary bypass aims to reduce the metabolic needs of patients, offering additional protection to the body, especially the vital organs, and avoiding anoxic injuries [21]. However, hypothermia reversibly inhibits clotting factors and platelets, and rapid rewarming and hyperthermia are associated with brain injury [22].

In the first moments of CPB, hypotension is common due to the reduction in the perfusion flow, the reduction in blood viscosity due to hemodilution, and the increase in bradykinin. After this period, the body begins a compensatory response that, particularly with hypothermia, the elevation of systemic vascular resistance and the absence of pulsatility in the circulation, results in hypertension [23].

Consequently, renal vasoconstriction occurs, predisposing the kidneys to ischemia and injury [24]. In addition, hemodilution with crystalloid solutions, when in excess, predisposes the patient to the formation of edema and watery diuresis rich in electrolytes, favoring hydroelectrolytic imbalance [25,26].
Hemorrhagic disorders related to CPB arise from changes in blood clotting, since blood circulates through tubes and devices that are non-endothelial surfaces. During CPB, this imbalance of blood hemostasis most commonly manifests as thrombotic events, while, after CPB, bleeding is usually reported [23,26,27].

Regarding the lungs, there is an increase in water leakage into the interstitium due to inflammatory cells, surfactant inactivation, atelectasis and reduction in lung capacity, which, together with exposure to the hypothermia maintained during CPB, cause damage to the pulmonary endothelium [28,29].

Metabolic Response

Patients undergoing CPB are subject to a significant hydroelectrolytic imbalance. The migration of water between the different compartments depends on the concentration of electrolytes so that the body's water balance is maintained. Thus, when the patient undergoes CPB, important imbalances may occur, such as excessive hydration due to the increased volume of crystalloids in the perfusate [30].

The hyperhydrated patient may present facial or generalized edema, ascites, pleural effusion, respiratory failure, asthenia, disorientation, delirium and seizures or other neurological manifestations. Hyperhydration is a complication that is accentuated in patients with low amounts of protein in the body, representing another risk factor for these individuals when undergoing surgical procedures, considering that the oncotic pressure of the plasma is reduced and allows extravasation of liquid from the plasma to the interstitial space, especially if the fluid supply is not adequately dimensioned [31,32].

In this context, sodium (Na+) is the main ion for sustaining water balance, since its loss can cause a reduction in extracellular osmotic pressure and, consequently, cause water to shift from this compartment to the intracellular compartment. However, if there is an increase in extracellular sodium levels, there is an increase in osmotic pressure and, consequently, this results in the interstitial accumulation of water, with the development of edema [31,32].

Another intracellular electrolyte is potassium (K+), which is responsible for conducting the electrical impulse and enabling muscle contraction. Its unbalanced extracellular accumulation, characterized by hyperkalemia, can be harmful, reducing electrical conduction and myocardial contraction strength and even causing cardiac arrest, which demonstrates its significance in cardiorespiratory surgical procedures in patients undergoing CPB, especially during the infusion of the cardioplegia solution [31,32].

Other studies point in the same direction, highlighting calcium (Ca2+) as a fundamental electrolyte for bone formation and blood flow regulation; its lack characterizes hypocalcemia and can cause the same risks mentioned in relation to hyperkalemia, that is, it can cause cardiac arrest, resulting in death, and an adequate calcium balance also tends to reduce the risk of blood clotting during and after the surgery [31,32].

Magnesium (Mg2+) is an important electrolyte in activating metabolism, including glycemic and protein metabolism, in addition to enabling neuromuscular contraction; however, in high concentration, hypermagnesemia poses risks with regard to unbalanced muscle relaxation, such as in the heart muscle, as well as causing cardiac disorders related to electrical conduction [33].
Hydroelectrolytic alterations may therefore pose imminent risks to surgically assisted cardiac patients, so that, when undergoing CPB, the individual must have balanced systemic functions or, on the contrary, when the respective alterations are identified, the artificial organs must restore the balance [30].

An elevation of lactate levels (hyperlactatemia) is detectable in 10% to 20% of patients undergoing CPB and is associated with adverse effects such as increased morbidity and mortality [34]. Type A hyperlactatemia is the most common type in patients after cardiac surgery and is strongly associated with metabolic acidosis. It results from anaerobic metabolism, when the supply of oxygen is reduced below the requirement of cellular metabolism, resulting in tissue hypoxia [35].

Some factors that lead to dysoxia during CPB and culminate in increased lactate levels are comorbidities intrinsic to the patient, such as preoperative dehydration, high levels of preoperative serum creatinine, active endocarditis, congestive heart failure, low left ventricular ejection fraction, hypertension, atherosclerosis, low preoperative and perioperative hemoglobin values and preoperative low cardiac output [36].

The reference range given for blood lactate is 0.5-2.2 mmol/L under physiological conditions, with some variation between authors. Alert levels are usually defined as being above 3 mmol/L during CPB, while other authors point out that a lactate peak above 4.0 mmol/L is a better predictor of morbidity and mortality, and there are still those who suggest that significant hyperlactatemia corresponds to a level greater than 5 mmol/L [37].

Lactate metabolism is closely related to glucose metabolism, as both compounds are used to biosynthesize each other. Therefore, metabolic disorders that affect glucose metabolism alter lactate homeostasis [38]. In cardiac surgeries, poor glycemic control is associated with the induction of type B hyperlactatemia in the perioperative period, and studies show that low levels of glucose and high levels of lactate are closely related in these patients; commonly, levels are spontaneously regulated within 24 h [39].

Systemic Inflammatory Reaction

CPB is associated with microvascular alterations in several pathological aspects. Endothelial cell injury and consequent acute inflammation with vascular damage, alteration of the coagulation cascade, reperfusion injury and gaseous microemboli contribute to organ dysfunctions during and after CPB [40].

The pathophysiology of the systemic inflammatory response to CPB is multifactorial and can be divided into two main phases: "early" and "late". The first phase occurs when blood makes contact with non-endothelial surfaces of the system cannulas ("contact activation"). The late phase is caused by ischemia-reperfusion injury (I/R injury), endotoxemia, coagulation disorders and heparin/protamine reactions [41].

Endothelial Injury

The endothelium participates in different physiological functions, including the control of vascular tone and permeability, hemostasis and immune system responses. CPB can lead to an inflammatory state similar to SIRS, with endothelial damage [42].
There are two main mechanisms of endothelial injury: neutrophil-mediated and non-neutrophil-mediated. In the first, integrins on the surfaces of neutrophils bind to molecules on endothelial cells, generating oxidative stress due to the reduction of ferric iron to ferrous iron caused by the superoxide anion, which is generated from xanthine oxidase, produced by neutrophil elastase introduced into endothelial cells [40,43]. Neutrophil-mediated endothelial cytotoxicity can also be caused by intracellular mechanisms involving nitric oxide synthase [44].

In non-neutrophil-mediated injury, circulating pro-inflammatory cytokines (TNF-α and IL-1) directly stimulate endothelial cells, leading to a pathological increase in permeability, causing tissue edema and impaired oxygen exchange, and resulting in multiorgan dysfunction [45].

Alteration of the Coagulation Cascade

Contact with the artificial surface of the circuit leads to activation of the coagulation cascades (Figure 1) and the alternative complement pathway. Activated factor XII (XIIa) leads to the generation of bradykinin and the activation of the intrinsic coagulation pathway; bradykinin is a potent vasoactive peptide that alters endothelial permeability and smooth muscle tone and induces the production of cytokines and nitric oxide [46]. As artificial surfaces, unlike the endothelium, have no regulatory molecules for suppressing the complement system, they lead to an excessive inflammatory response and capillary leakage, which has been demonstrated as a complication of cardiopulmonary bypass [46].

Furthermore, cytokines generated from bradykinin induction may be able to activate the extrinsic pathway of coagulation and leukocytes. Together, there is an amplification of the common pathway of coagulation, so activated factor X (FXa) converts prothrombin (II) to thrombin (IIa), which then cleaves fibrinogen (I) to fibrin (Ia), resulting in subsequent clot formation [47,48]. Factor IIa increases the expression of adhesion molecules, such as platelet activating factor (PAF), P-selectin and E-selectin, by endothelial cells, increasing the adhesion and activation of defense cells, such as neutrophils [49].

The factor Xa and the thrombin formed, even with the administration of heparin, are related to inflammatory processes and tissue remodeling. Thus, coagulation is closely related to inflammatory processes during CPB [50].

Oxidative Stress

Aortic clamping in CPB blocks coronary flow with a decrease in oxygen supply to myocytes, altering electrical activity and ceasing cardiac mechanical activity. This maneuver of aortic clamping and subsequent removal of the clamp in CPB provides favorable conditions for the formation of free radicals, triggering oxidative stress [12,51].
In ischemia, the supply of oxygen to the mitochondria ceases, interrupting the Krebs cycle. Thus, the generation of ATP becomes primarily anaerobic. This change is accompanied by an increase in cytosolic lactate and a reduction in intracellular pH. The reduction in the cellular concentration of ATP interrupts the activity of active pumps that are important in ionic homeostasis, such as the sodium-potassium pump and the sarcoplasmic reticulum Ca2+-ATPase, resulting in cytosolic overload of Na+ and Ca2+, which prevents cell repolarization, leading to contractile dysfunction. Additionally, high concentrations of Ca2+ in the cytosol activate enzymes associated with lipid peroxidation, production of reactive oxygen species (ROS), dysfunction of contractile proteins, loss of cell function and, ultimately, cell death [52,53].

Tissue reperfusion at the end of aortic occlusion promotes the formation of ROS in the cytosol, mitochondria, peroxisomes, lysosomes and plasma membrane of polymorphonuclear leukocytes activated during the CPB ischemia period. Thus, the reintroduction of molecular oxygen into the ischemic heart tissue causes free radicals to react with polyunsaturated fatty acids of the cell membrane. This starts a chain of oxygen-dependent lipid deterioration in which lipid peroxides and hydroperoxides are formed; these are generated rapidly by the NADPH oxidase complex in response to cytokines, resulting in the generation of an excessive amount of ROS, decreased membrane fluidity, increased permeability and consequent damage to the cell membrane, contributing to functional impairment in CPB [53-55].

When these ROS accumulate disproportionately to the body's antioxidant capacity, we are facing a situation called oxidative stress. The excess of reactive species causes arrhythmias, reduction of systolic function and changes in the permeability of the myocyte membrane. To avoid this, hydroperoxide radicals are removed from cells by enzyme systems with antioxidant functions, generally present in the myocardium. These enzyme systems with antioxidant action are responsible for limiting the intracellular accumulation of reactive species during normal metabolism, reducing oxidative damage to proteins, lipids and DNA [53-57].

Vasoplegic Syndrome Associated with CPB

One of the main changes that occur in patients undergoing cardiac surgery with cardiopulmonary bypass is the vasoplegic syndrome. It is characterized as a circulatory shock of the distributive type, presenting with systemic vasodilation. It has a pathophysiology and treatment similar to that of sepsis, with some peculiarities [58-61]. In this sense, the vasoplegia observed after CPB can affect a considerable number of patients, with studies showing an incidence greater than 50%, and is characterized by a low systemic vascular resistance (SVR), with normal or slightly increased cardiac output, occurring in the first 24 h and associated with a cardiac index (CI) greater than 2.2 L/min/m2. This drop in SVR leads to tissue hypoperfusion and progression to organ dysfunction [62,63]. The treatment of this condition is based on the use of vasopressors to maintain mean arterial pressure within recommended values, and recent research has associated the use of new, non-catecholaminergic strategies as adjuvant treatments.
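As a rough numerical illustration of the hemodynamic picture described above (and not clinical guidance), the sketch below flags a vasoplegia-like profile. The cardiac index threshold (> 2.2 L/min/m2) and the 24 h window are taken from the text; the SVR formula and the low-SVR cut-off of 800 dyn·s·cm-5 are common textbook values assumed here for the example.

```python
# Illustrative only, not clinical guidance. Values marked "assumed" are not
# taken from the review; the CI cut-off (> 2.2 L/min/m2) and the 24 h window are.
def svr_dyn_s_cm5(map_mmhg: float, cvp_mmhg: float, co_l_min: float) -> float:
    """Systemic vascular resistance: SVR = 80 * (MAP - CVP) / CO."""
    return 80.0 * (map_mmhg - cvp_mmhg) / co_l_min

def vasoplegia_like_profile(map_mmhg: float, cvp_mmhg: float, co_l_min: float,
                            cardiac_index: float, hours_after_cpb: float) -> bool:
    low_svr = svr_dyn_s_cm5(map_mmhg, cvp_mmhg, co_l_min) < 800.0  # assumed cut-off
    preserved_output = cardiac_index > 2.2                         # from the text
    early = hours_after_cpb <= 24.0                                # from the text
    return low_svr and preserved_output and early

# Example: MAP 55 mmHg, CVP 8 mmHg, CO 6.5 L/min, CI 3.1 L/min/m2, 4 h after CPB
print(vasoplegia_like_profile(55, 8, 6.5, 3.1, 4))  # True (SVR ~ 578 dyn*s/cm^5)
```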
Furthermore, statistics indicate that approximately 25% of patients undergoing cardiac surgery with cardiopulmonary bypass present vasoplegic syndrome, with variations depending on the associated risk factors, and with high rates of morbidity and mortality in these patients [64,65]. The mechanisms by which CPB leads to vasoplegia are multifactorial and many signaling pathways are still being studied, but factors inherent to patients, such as obesity, diabetes, autoimmune diseases and intraoperative hyperthermia, seem to contribute to the increased incidence, in addition to the prolonged duration of CPB [64].

Regulatory Mechanisms: Vasoconstriction and Vasodilation

Physiologically, the dynamics of vascular smooth muscle contraction are strongly linked to the calcium ion. Contraction occurs as an intracellular response to the increase in this positive ion, which forms a complex with calmodulin; this complex activates signaling pathways that lead to the phosphorylation of the myosin light chain, which binds to actin, promoting the shortening of muscle fibers and the contraction effect (Figure 2). There are several ways in which increased intracytoplasmic calcium can occur; the main ones are through G protein-coupled receptors of the Gq type, in which the activation of the receptor internally signals the cell to mobilize calcium from the endoplasmic reticulum; the receptors responsible are the alpha-1 adrenergic receptor, the vasopressin-1 receptor and the type 1 angiotensin receptor [66,67].

Figure 2. Physiology of contraction and relaxation of vascular smooth muscle.
Muscle contraction occurs in response to the activation of receptors present in the membrane, such as the alpha-1 adrenergic receptor (a1R), the vasopressin-1 receptor (V1R) and the angiotensin type-1 receptor (AT1R); their activation by the selective agonist promotes activation of the Gq protein and, with that, the signaling pathway starts. With this, there is release of calcium by the endoplasmic reticulum through inositol trisphosphate (IP3); the calcium released in the cytosol binds with calmodulin (CaM), forming the calcium-calmodulin complex that phosphorylates myosin and promotes contraction. On the other hand, the nitric oxide (NO) produced by the endothelium reaches the smooth muscle, converting GTP into cGMP. cGMP leads to dephosphorylation of myosin and promotes relaxation. In addition, NO activates ATP-sensitive potassium channels (KATP), leading to hyperpolarization and inhibition of vasoconstriction. Angiotensin II (AgII); norepinephrine (NE); arginine-vasopressin (AVP); cyclic guanosine monophosphate (cGMP); guanosine triphosphate (GTP); diacylglycerol (DAG); phospholipase C (PLC); phosphatidylinositol-4,5-bisphosphate (PIP2); protein kinase C (PKC).

Norepinephrine (NE), which is the endogenous ligand of the alpha-1 adrenergic receptor, is released from nerve endings originating from the sympathetic chain, and epinephrine, a derivative of NE, is released from the adrenal gland and is also capable of binding to the alpha-1 adrenergic receptor. Arginine vasopressin (AVP) is released from the hypothalamic-pituitary axis, and angiotensin II is regulated as part of the renin-angiotensin-aldosterone axis [66,67]. All these signals are regulated in response to stress and can be pharmacologically modulated.
To cause vasodilation, or in the absence of vascular muscle contraction, it is possible to block the receptors or to act through the production of nitric oxide (NO) [44,68]. NO is produced by the enzyme nitric oxide synthase (NOS), which converts L-arginine into NO. There are three isoforms of NOS. eNOS is present in the endothelium and offers a constant production of NO to endothelial cells; this NO can rapidly diffuse to vascular smooth muscle cells and exert its vasodilatory effects [68]. iNOS is inducible by oxidative stress and inflammatory cytokines (interleukin 1, tumor necrosis factor alpha and interferon gamma) and can produce significantly higher levels of NO than eNOS [66,69-71]. These high levels of NO can react with superoxide, leading to peroxynitrite formation and cellular toxicity [72]. Finally, nNOS is present in neuronal cells and participates in functions such as neuronal plasticity and regulation of cerebral perfusion pressure, with mechanisms of vasoconstriction and relaxation of cerebral arteries, but this isoform of NOS participates in few of the systemic mechanisms of NO-induced vasodilation [73].

NO promotes vasodilation through several mechanisms, the main one being the activation of guanylyl cyclase, an enzyme found in vascular smooth muscle that catalyzes the conversion of guanosine triphosphate (GTP) to cyclic guanosine monophosphate (cGMP), which inhibits calcium entry through voltage-gated channels and activates intracellular proteins that are cGMP-dependent [62]. NO also activates ATP-sensitive potassium channels (KATP), which promote potassium efflux and induce the cell into a hyperpolarized state. In a hyperpolarized state, the secondary intracellular cascade that leads to vasoconstriction is inhibited [74].

Vasoplegia

These regulatory mechanisms of vasoconstriction become deregulated, to a greater or lesser extent, during and after cardiopulmonary bypass (Figure 3). The passage of blood through the CPB circuit stimulates the activation of the complement cascade, the production of ROS and the release of inflammatory mediators, such as the triad of cytokines interleukin-1 (IL-1), interleukin-6 (IL-6) and tumor necrosis factor-alpha (TNF-α) [45,75-78]. These inflammatory mediators can act in specific areas of the brain, such as the locus coeruleus and the paraventricular nucleus of the hypothalamus, where the cells responsible for the hypothalamic-pituitary-adrenal axis are located; they stimulate these regions and lead to a reduction and desensitization of the alpha-1 adrenergic receptor and an increase in the inflammatory state, thus forming a cycle that is difficult to overcome [11]. This reduction in receptors may be related to reduced gene expression in response to inflammation [66] or, more significantly, to receptor desensitization triggered by exaggerated catecholamine release in response to baroreflex-dependent stimulation [11,66].
These inflammatory mediators can also increase the production of nitric oxide (NO) through the induction of iNOS by activating nuclear factor kappa B (NF-κB); NO is a vasodilator and, in excess, can result in vasoplegic shock [79-81]. In response to shock, the body stimulates the release of vasopressors via another region of the hypothalamus and stimulates the renin-angiotensin-aldosterone system to produce angiotensin II, in an attempt to maintain the contraction of vascular smooth muscle and tissue perfusion; however, if the shock persists, these mechanisms suffer depletion and saturation [82,83]. Despite this, vasopressin is of considerable importance in the process of controlling vasoplegia, as it is capable not only of neutralizing the effects of NO, but also of decreasing NO production [61,84].

Although the body maintains internal control mechanisms in response to vasoplegia, such as the release of vasopressors and sustained activation of the sympathetic system [11], recent studies have shown that circulating levels of vasopressin are reduced a few hours after cardiopulmonary bypass, suggesting that CPB may deplete this vasopressor and contribute to the installation/maintenance of vasoplegic syndrome [85,86]. Beyond vasopressin depletion, the activity of several types of K+ channels promotes potassium efflux and membrane hyperpolarization. Among these channels is the ATP-sensitive K+ channel (KATP) [74]. Activation of this channel has been strongly linked to the development of pathological vasodilation [62]. Several mechanisms may explain the activation of the KATP channel in the vasoplegic syndrome associated with CPB, including NO release, vasopressin deficiency, hypoxia and acidosis [62,86,87].

Figure 3. Pathophysiology of vasoplegia. With the return of blood to the systemic circulation after passing through the oxygenation membrane during cardiopulmonary bypass, there is a depletion of the endogenous ligands of the alpha-1 adrenergic receptors, the vasopressin-1 receptor and the type-1 angiotensin receptor, in addition to the downregulation of the receptors themselves, making vasoconstriction an event that is unlikely to occur. The reduction in receptor concentration leads to a reduction in intracellular calcium and, with it, a reduction in contraction. In addition, CPB induces the production of reactive oxygen species (ROS) and releases inflammatory mediators that can lead to the desensitization of adrenoreceptors and induce the production of nitric oxide (NO). NO leads to an increase in cGMP, which inhibits calcium in cells, leading to muscle relaxation. NO also activates ATP-sensitive potassium channels (KATP), leading to hyperpolarization and inhibition of vasoconstriction. NO further reacts with the superoxide anion radical (O2−) to form peroxynitrite (PN).
In addition, the surgical incision is initially capable of triggering an inflammatory response, even if to a lesser extent when compared to CPB. As the surgery progresses and the patient is coupled to the cardiopulmonary bypass system, systemic inflammatory response syndrome can occur [80]. Of the generated cytokines, it is believed that interleukin-6 is the one most associated with vasoplegia, as it is notably a potent inhibitor of vascular contraction [64,88,89]. The contact time of the blood with the CPB equipment is one of the factors that determine the degree of the inflammatory response generated; as bypass continues, a secondary immune response occurs as a result of the reinfusion of blood from the circuit into the aorta. Hemolyzed cells and injured platelets stimulate this secondary immune response, leading to increased inflammation and subsequent loss of vascular tone [80,90].

In refractory cases of vasoplegic syndrome, an increase in the dose of vasopressors is necessary; however, if this has no effect on mean arterial pressure levels, the patient can be returned to CPB and a new attempt to wean from CPB can be performed. This may occur because the inflammatory state generated may be capable of inhibiting the response of adrenergic receptors to catecholamines, through mechanisms that are still poorly understood [66].

Future Perspective and Antioxidant Agents

Although there are endogenous control systems to reduce the oxidative stress generated by cardiopulmonary bypass, they alone are often unable to attenuate the damage caused [91]. There are considerable biochemical changes during and after CPB, such as oxygenation across non-endothelial membranes, reperfusion damage and absorption of nutrients by the CPB circuit [91,92]. This context of inflammation and oxidative stress has been associated with postoperative complications in these patients, such as atrial fibrillation and acute kidney, lung and liver injury [12,51,93-95]. In this sense, several drugs with antioxidant properties are being investigated, either as individual therapies or as combined treatments, to reduce oxidative damage.
Miniaturized Cardiopulmonary Bypass

The miniaturized cardiopulmonary bypass (mCPB) method was developed as a more biocompatible alternative to conventional cardiopulmonary bypass [96]. It consists of a small, closed, heparin-coated circuit in which venous blood is returned to a diffusion membrane oxygenator through active drainage, which reduces mechanical trauma. Although some studies have shown considerable benefit in relation to conventional systems, mainly in the reduction of the inflammatory response and its associated complications [97,98], meta-analyses of randomized clinical studies show contrasting results, in which there is no significant difference in the incidence of primary outcomes such as stroke and mortality between mCPB and conventional CPB [99].

Low-Level Light Therapy

As already mentioned, oxidative stress triggers several events that can lead to negative outcomes in the patient [12]. Although new methods have been developed, such as mCPB and materials with better biocompatibility, the results are still uncertain. In this regard, although premature, the use of low-level light therapy (LLLT) in the red-to-near-infrared range has shown promise [100-103]. The results of this method include a reduction in the inflammatory response, in lipid peroxidation and in hemolysis during cardiopulmonary bypass [100]. Furthermore, it has been shown to reduce platelet loss and changes in the pattern of aggregation and CD62P (P-selectin) expression. This suggests that LLLT can stabilize platelet function during CPB and decrease the side effects associated with the interaction of cells with non-endothelial surfaces [101].

Dexmedetomidine

Some anesthetics may extend clinical benefits beyond anesthesia and may also offer anti-inflammatory support [104]. Dexmedetomidine (DEX), an alpha-2 adrenergic receptor agonist, has been studied as a possible modulator of the inflammatory response caused by cardiopulmonary bypass in cardiac surgery [105-107]. Bulow et al. [107] demonstrated that the use of dexmedetomidine attenuated the increase in inflammatory cytokines (IL-1β, IL-6, TNF-α and INF-γ) in patients undergoing cardiac surgery for up to 24 h after CPB. In addition, randomized clinical studies have shown that, in patients undergoing cardiac surgery with CPB, the intraoperative administration of DEX reduced the levels of pro-inflammatory cytokines during and after CPB, in addition to presenting possible renal and cardiac protection [108-112].

Some mechanisms are suggested for this effect, such as the inhibition of noradrenaline overflow and the activation of the vagus nerve and the nicotinic acetylcholine receptor, which are related to the suppression of inflammatory cytokines [113,114]. In this sense, DEX is considered a promising candidate for modulating the inflammatory response, although more studies are needed to explore the effect of dexmedetomidine on the long-term prognosis of patients.

N-Acetylcysteine

N-acetylcysteine (NAC) is an acetyl derivative of L-cysteine with an active mercapto group. Although it is widely used as a mucolytic in respiratory syndromes, it has recently gained prominence, as studies have shown that the use of NAC prevents oxidative damage, inhibits apoptosis and the inflammatory response, and promotes the synthesis of glutathione, one of the main endogenous antioxidants, in cells [115-118].
Regarding the heart, NAC can improve the systolic function of myocardial cells and cardiac function, in addition to protecting against adverse ventricular and vascular remodeling [115,119-122]. In addition, NAC has exhibited important effects by reducing the levels of lactate and nitrogenous waste products in the blood 24 h after cardiac surgery with CPB, suggesting a beneficial effect on peripheral and renal tissue perfusion [123]. Furthermore, the prophylactic use of NAC attenuates the liver damage induced by cardiopulmonary bypass during cardiac surgery, in addition to reducing the incidence of acute kidney injury in this type of surgery [93,124-126]. However, more clinical studies are needed to standardize the necessary doses and treatment times, as well as to monitor possible unwanted effects.

Nitric Oxide (NO)

NO has emerged as a promising alternative for protecting organs in cardiac surgeries, especially the kidneys and the heart itself. NO is formed in the human body from the oxidation of L-arginine via NO synthases, which are present in three isoforms (neuronal, endothelial and inducible) [127,128]. Administered via inhalation, it generates selective pulmonary vasodilation and, owing to its short half-life, it does not have systemic action. However, its metabolites can circulate throughout the body and act on more distant organs. Currently, inhaled NO is not recommended for routine use in patients under mechanical ventilation, but it is commonly used in pulmonary arterial hypertension crises and in persistent hypoxemia in acute respiratory distress syndrome (ARDS) [127].

For a long time, possible deleterious effects of NO on cardiac muscle were emphasized, due to the formation of free radicals from its degradation. However, more robust studies have shown worsening of ischemia/reperfusion injuries when NO synthases are inhibited and improved outcomes with the administration of exogenous NO [127,129].

The clinical applicability of exogenous NO in the context of cardiac surgeries with CPB still has few studies, but the results are promising. In randomized clinical trials of NO administration in patients undergoing cardiac surgery with CPB, the development of acute kidney injury was lower compared to the placebo groups [130,131]. In addition, recent studies have shown that the use of NO during CPB can contribute to cardioprotection, with significant reductions in troponin I levels and in the incidence of low cardiac output syndrome [132-135]. Thus, even with increasing evidence, significant gaps remain, and more high-quality, multicenter, high-volume research is needed in both adult and pediatric populations.

Vitamin C

Ascorbic acid has a great ability to donate electrons, which makes it a potent antioxidant capable of interrupting cascades of free radicals that cause lipid peroxidation. It also contributes to the immune system in processes such as neutrophil chemotaxis, phagocytosis by lymphocytes and cell renewal [136,137]. In addition, vitamin C can modulate phosphorylation signaling in erythrocytes and stimulate endothelial nitric oxide production, which may reduce blood loss and vasoplegic syndrome [138,139].
Furthermore, studies have demonstrated the benefit of vitamin C supplementation, contributing to the reduction of arrhythmias (mainly atrial fibrillation) and of the duration of mechanical ventilation, ICU and hospital stay, despite not having demonstrated an improvement in mortality [138-142]. Although its use is promising, even with limitations, studies that assess the form of administration, whether in bolus or continuous infusion, before, during or after CPB, are necessary to better assess the effectiveness of vitamin C supplementation.

Vitamin E

Vitamin E comes in different forms (isomers), with α-tocopherol having the highest antioxidant potential. These isomers are present in cell membranes and have an antioxidant action by inhibiting lipid peroxidation. It has already been demonstrated that serum levels of vitamin E are reduced during and after cardiac surgery; however, the primary outcomes linked to its supplementation are still conflicting despite good experimental results, mainly linked to α-tocopherol [143-146].

Conclusions

Cardiac surgery using cardiopulmonary bypass, although not perfect, remains essential within intraoperative management. The inflammatory state, the production of reactive oxygen species and oxidative stress remain challenges within the medical sciences. In addition, the use of pharmacological strategies with antioxidant potential that aim to reduce these radicals has promising potential in reducing CPB complications such as the vasoplegic syndrome. Thus, more research needs to be carried out, whether in basic science or in randomized controlled clinical studies, in addition to more rigorous intraoperative management.
10,416
2023-10-01T00:00:00.000
[ "Medicine", "Biology", "Engineering" ]
Isolation and characterization of β-transducin repeat-containing protein ligands screened using a high-throughput screening system

β-transducin repeat-containing protein (β-TrCP) is an F-box protein subunit of the E3 Skp1-Cullin-F box (SCF) type ubiquitin-ligase complex and provides the substrate specificity for the ligase. To find potent ligands of β-TrCP that may be useful for a future proteolysis targeting chimera (PROTAC) system based on β-TrCP, we developed a high-throughput screening system for small molecule β-TrCP ligands. We screened a chemical library with this system and obtained several hit compounds. The effects of the hit compounds on the in vitro ubiquitination activity of SCFβ-TrCP1 and on downstream signaling pathways were examined. Hit compounds NPD5943, NPL62020-01, and NPL42040-01 inhibited the TNF-α-induced degradation of IκBα and its phosphorylated form. Hence, they inhibited the activation of NF-κB transcription activity, indicating effective inhibition of β-TrCP by the hit compounds in cells. Next, we performed an in silico analysis of the hit compounds to determine their important moieties. The carboxyl groups of NPL62020-01 and NPL42040-01 and the hydroxyl groups of NPD5943 created hydrogen bonds with β-TrCP similar to those created by the intrinsic target phosphopeptides of β-TrCP. Our findings enhance our knowledge of useful small molecule ligands of β-TrCP and of the moieties that allow a molecule to serve as a β-TrCP ligand.

Introduction

Ubiquitination is the covalent attachment of ubiquitin to a target protein [1]. This type of modification affects many cellular processes, including growth factor-induced endocytosis, DNA repair, and chromatin modification/transcriptional regulation [2]. E3 ligases are essential in the selection of substrates. Ubiquitin modification induces multiple types of cellular signaling; therefore, proper substrate selection ensures correct ubiquitin signaling [3]. The Skp1-Cullin-F box complex (SCF) is one of the classical types of E3 enzymes whose substrate specificity is determined by F-box proteins. As one of the F-box proteins, β-transducin repeat-containing protein (β-TrCP) is encoded by two different genes (FBXW1 and FBXW11), with indistinguishable biochemical identities in mammals [4,5]. β-TrCP, a substrate acceptor, along with CUL1, the RING protein RBX1, and SKP1, forms the activated SCF β-TrCP E3 ligase, which is essential for regulating the protein levels of many key targets [6]. It recognizes the consensus degradation (destruction) motif DSGφXS, where φ and X represent hydrophobic and arbitrary amino acids, respectively. The binding of β-TrCP depends on the phosphorylation of the two serines of the destruction motif, which links phosphorylation-dependent signaling to the ubiquitination and destruction of proteins [7].

In a previous study examining IκBα ubiquitination using TR-FRET, GS143 was identified as an inhibitor of β-TrCP that suppressed β-TrCP-dependent IκBα ubiquitination. GS143 inhibited TNF-α-induced IκBα degradation and the downstream NF-κB reactions, without affecting p53 and β-catenin responses; it is a potent inhibitor of the NF-κB signaling pathway [8]. In this study, GS143 was used as a positive control.

Here, we established a high-throughput screening system to screen for β-TrCP ligands. Hit compounds were analyzed for their SCF β-TrCP inhibitory activities. The hit compounds may be used for PROTAC construction in the future.
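To make the degron recognition described above concrete, the following minimal sketch (not code from the study) scans short peptide sequences for the DSGφXS pattern. The hydrophobic residue class and the example hexapeptides (β-catenin DSGIHS around S33/S37, IκBα DSGLDS around S32/S36) are assumptions drawn from general knowledge of these substrates rather than from this paper.

```python
import re

# Beta-TrCP degron DSG-phi-X-S: phi hydrophobic, X any residue.
# The hydrophobic class below is an assumption chosen for illustration.
DEGRON = re.compile(r"DSG[AVLIMFWYC].S")

examples = {
    "beta-catenin core degron": "DSGIHS",  # phosphorylated on S33/S37 in the native protein
    "IkB-alpha core degron": "DSGLDS",     # phosphorylated on S32/S36 in the native protein
    "non-degron control": "ASGLDT",
}

for name, seq in examples.items():
    match = DEGRON.search(seq)
    print(f"{name}: {'match ' + match.group(0) if match else 'no match'}")
```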
In vitro ubiquitination assay

Components of the SCF complex (SKP1, Cul1, Rbx1, and hemagglutinin (HA)-tagged β-TrCP) were expressed in HEK293T cells. Anti-HA antibody was added to the lysate of HEK293T cells expressing the SCF complex and rotated at 4°C for 1 h. Aliquots of this mixture were added to protein G beads and rotated at 4°C for 30 min. After the beads were washed in cell lysis buffer three times, the tube was centrifuged at 20,400 × g for 5 min and the supernatant was discarded. The protein on the beads was used as the recombinant SCF β-TrCP complex. Aliquots of the SCF complexes bound to the beads were then added to ubiquitination reactions (10 mM Tris pH 7.5, 5 mM MgCl2, 1 mM DTT, 1 mM ATP, 20 mM creatine phosphate, 0.1 mg/mL creatine kinase, 5 mM NaF, 1 mM Na3VO4, 2 mg/mL ubiquitin, 0.5 μg E1 enzyme, 2 μg E2 enzyme) in a final volume of 30 µL. Compounds at a final concentration of 100 µM were added to the reactions, and GST-Wee1KR was added as the substrate 15 min later. The reaction mixtures were incubated at 37°C for 2 h and were further examined by immunoblotting.

Analysis of protein degradation

To examine the effect on degradation of IκBα, HeLa cells were cultured in the presence or absence of the compound for 30 min. TNF-α (Cat. No. TNA-H4211, Acro Biosystems, Newark, DE, USA) was added (40 ng/mL) and incubated for another 5 min. The cell extracts were analyzed by immunoblotting using antibodies against IκBα and phosphorylated IκBα (p-IκBα). To examine the effect on degradation of β-catenin, HeLa cells were cultured in the presence or absence of the compound for 3 h. The cell extracts were examined by immunoblotting using antibodies against β-catenin and phosphorylated β-catenin (p-β-catenin).

Luciferase reporter assays

HeLa cells were seeded on 96-well plates, and TCF and NF-κB transcription activity was examined using luciferase assay systems. T-cell factor (TCF) transcription activity was assessed by the TOP/FOP-Flash luciferase reporter assay, using the pRL-SV40 vector as an internal control. TOPflash contains three TCF-binding sites upstream of a minimal promoter and the firefly luciferase reporter gene. FOPflash is similar to the TOP reporter; however, it has three mutated TCF-binding sites and is used as a negative control. TCF activity was calculated as the ratio of TOP firefly luciferase to renilla luciferase divided by the corresponding FOP ratio. TOPflash (Sigma-Aldrich, Temecula, CA, USA) or FOPflash reporter plasmids (Sigma-Aldrich) and the pRL-SV40 renilla luciferase reporter (Promega) were co-transfected into HeLa cells using the Lipofectamine 3000 reagent kit (Thermo Fisher Scientific). Twenty-four hours after transfection, cells were cultured with different dilutions of the compounds for another 24 h. The cells were lysed, and the luminescence was measured using the Luciferase Assay System (Promega). Each condition was measured in at least three replicate samples.

For NF-κB transcription activity, pGL4.32, which contains five copies of the NF-κB response element driving transcription of the luciferase reporter gene, was used with pRL-SV40 as the internal control. NF-κB transcription activity was determined by firefly luciferase expression normalized to the renilla luciferase level. pGL4.32 reporter plasmids (Promega) and the pRL-SV40 renilla luciferase reporter (Promega) were co-transfected into HeLa cells using the Lipofectamine 3000 reagent kit. Twenty-four hours after transfection, cells were cultured with different dilutions of the compounds for another 24 h.
Furthermore, TNF-α was added, and the plates were cultured for another 4 h. The cells were lysed, and the luminescence was measured using the Luciferase Assay System. Each condition was measured in at least three replicate samples.

Molecular docking simulation

The 3D structures of the compounds were obtained from the NCBI PubChem database (https://pubchem.ncbi.nlm.nih.gov/). The SDF files were converted into PDB format with PyMOL, and AutoDockTools 1.5.7 was used to convert them into PDBQT format. The crystallographic data of β-TrCP1 (PDB code 1P22, PDB format) were obtained from the Protein Data Bank and processed with Chimera 1.16; the corresponding PDBQT file was created using AutoDockTools 1.5.7. Molecular docking simulation was performed by executing the AutoDock Vina algorithm. A total of 150 flexible docking runs were performed with an energy range of 5 and 20 binding modes, and the affinities (kcal/mol) were calculated. Finally, all the figures were made using PyMOL.

Statistical analysis

Luciferase reporter assays were examined in triplicate. The data are presented as the average ± S.D. GraphPad Prism 9.4.1 software (GraphPad Software, LLC) was used to evaluate the statistical differences among the groups by one-way analysis of variance (ANOVA).

Results

Establishment of the high-throughput system and first screening for β-TrCP ligands

C-IκB-PP15 is a phosphopeptide whose sequence is derived from IκB and is known to bind to β-TrCP. C-IκB-s15 is its non-phosphorylated version, which cannot bind to β-TrCP and was used as a negative control. Both were bound to a maleimide-activated plate through an N-terminal cysteine. Express SF+ cell lysates expressing mAG-β-TrCP1 were added to the 96-well plate. After binding and washing, the bound mAG-β-TrCP1 was quantitated by spectrofluorometry. The post-wash value indicates the binding of mAG-β-TrCP1 to C-IκB-PP15. When small molecules compete with the phosphopeptide for mAG-β-TrCP1 binding, the plate fluorescence decreases (Fig. 1A). The free target phosphopeptide competitively inhibited the binding of mAG-β-TrCP1 to the plate-fixed phosphopeptide, and the efficacy of the system was confirmed by mixing serially diluted phosphopeptide into the cell lysate (Fig. 1B).

To identify potent ligands of β-TrCP, ~5,000 RIKEN Natural Products Depository (NPDepo) compounds were examined using this high-throughput screening system at a final concentration of 0.2 mg/mL, and 109 small molecules were identified as initial positive compounds inhibiting binding by more than ~20% (Fig. 1C). These 109 compounds were then re-screened at a lower concentration (0.2 mM), and 13 hits were obtained. Furthermore, 372 derivatives of these 13 initial hits and 27 derivatives of the known inhibitor GS143 were examined. Finally, 23 positive NPDepo compounds (13 hits + 10 positive derivatives) and 13 positive GS143 derivatives were identified from the high-throughput system.

The secondary screening: in vitro ubiquitination assay

As the secondary screening, the activity of the hit compounds was examined using the SCF β-TrCP1 ubiquitination activity in vitro at a final concentration of 100 µM. The kinase-negative mutant GST-Wee1 KR was used as the substrate of SCF β-TrCP1.

Effect of the hit compounds on the Wnt/β-catenin signaling pathway

In the classic Wnt signaling pathway, when β-TrCP-induced ubiquitination is inhibited, β-catenin accumulates and further activates nuclear TCF transcription [31,32].
Statistical analysis

Luciferase reporter assays were performed in triplicate. The data are presented as the average ± S.D. GraphPad Prism 9.4.1 for Macintosh (GraphPad Software, LLC) was used to evaluate the statistical differences among the groups by one-way analysis of variance (ANOVA).

Results

Establishment of the high-throughput system and first screening for β-TrCP ligands

C-IκB-PP15 is a phosphopeptide whose sequence is derived from IκB and is known to bind to β-TrCP. C-IκB-s15 is its non-phosphorylated version, which cannot bind to β-TrCP and was used as a negative control. Both were immobilized on the maleimide-activated plate through their N-terminal cysteine. Express SF+ cell lysates expressing mAG-β-TrCP1 were added to the 96-well plate. After sufficient binding and washing, the bound mAG-β-TrCP1 was quantified by spectrofluorometry. The post-wash value indicates the binding of mAG-β-TrCP1 to C-IκB-PP15. When small molecules competed with the phosphopeptide for mAG-β-TrCP1 binding, the plate fluorescence decreased (Fig. 1A). The free target phosphopeptide competitively inhibited the binding of mAG-β-TrCP1 to the plate-fixed phosphopeptide, and the efficacy of the system was confirmed by mixing serially diluted phosphopeptide into the cell lysate (Fig. 1B). To identify potent ligands of β-TrCP, 5,000 RIKEN Natural Products Depository (NPDepo) compounds were examined with this high-throughput screening system at a final concentration of 0.2 mg/mL, and 109 small molecules that inhibited binding by more than ~20% were identified as initial positive compounds (Fig. 1C). These 109 compounds were then re-screened at a lower concentration (0.2 mM), and 13 hits were obtained. In addition, 372 derivatives of these 13 initial hits and 27 derivatives of the known inhibitor GS143 were examined. In total, 23 positive NPDepo compounds (13 hits + 10 positive derivatives) and 13 positive GS143 derivatives were identified from the high-throughput system.

[Figure 1 legend fragment recovered from the text: The data are presented as the average ± S.D.; statistical differences among the groups were evaluated using one-way ANOVA (p-value: **** < 0.0001). (C) Results of the initial screening of ~5,000 compounds. The effect of each compound (at 0.2 mg/mL) on mAG-β-TrCP1 binding was standardized and is represented as a percentage of the control (100% indicating no inhibition). Compounds that inhibited the binding by >20% were the initial hits.]

The secondary screening: in vitro ubiquitination assay

As the secondary screening, the hit compounds were examined at a final concentration of 100 µM for their effect on SCF β-TrCP1 ubiquitination activity in vitro. The kinase-negative mutant GST-Wee1 KR was used as the substrate of SCF β-TrCP1.

Effect of the hit compounds on the Wnt/β-catenin signaling pathway

In the canonical Wnt signaling pathway, when β-TrCP-induced ubiquitination is inhibited, β-catenin accumulates and activates nuclear TCF transcription [31,32]. We examined the effects of the hit compounds on β-catenin degradation in HeLa cells. HeLa cells were cultured for 3 h with or without the compound, and the levels of β-catenin and p-β-catenin were examined in the cell extracts. The proteasome inhibitor MG132 caused the accumulation of β-catenin and its phosphorylated form, whereas GS143 did not affect β-catenin or p-β-catenin, consistent with published results [8]. Among the seven hit compounds identified from the screening above, only the GS143 derivatives NPL72038-01 and NPL62039-01 promoted the accumulation of phosphorylated β-catenin; however, they did not significantly affect the β-catenin level (Fig. 3A). Wnt/β-catenin signaling pathway activity was also assessed with the TOP/FOPflash luciferase reporter assay, in which TCF activity is the TOP firefly/Renilla luciferase ratio divided by the corresponding FOP ratio. Wnt3a is the molecule that stimulates canonical Wnt signaling [11]; after adding Wnt3a, the level of TCF transcription increased, which confirmed the efficacy of this system (Fig. 3B). Consistent with the immunoblotting results, GS143 had little effect on TCF transcription, inhibiting it only minimally, and NPL72038-01 and NPL62039-01 likewise did not cause a marked increase in TCF transcription (Fig. 3C).

Effect of the hit compounds on the NF-κB signaling pathway

NF-κB is a typical pro-inflammatory signaling pathway activated by pro-inflammatory cytokines such as TNF-α. TNF-α induces the β-TrCP-dependent ubiquitination and degradation of IκBα, allowing NF-κB to relocate into the nucleus [33,34]. The impact of the hit compounds on IκBα degradation was further examined in HeLa cells. Cells were cultured for 30 min with or without the indicated compounds; TNF-α was then added and incubated for an extra 5 min, and the levels of IκBα and p-IκBα were examined in the cell extracts. MG132 and GS143 inhibited the degradation of IκBα and its phosphorylated form, as previously published [34]. Among the seven hit compounds from the secondary screening, the NPDepo compound NPD5943 and the GS143 derivatives NPL72038-01, NPL62020-01, and NPL42040-01 increased the levels of IκBα and p-IκBα (Fig. 4A). NF-κB transcription activity was also determined from firefly luciferase expression normalized to the Renilla luciferase level; TNF-α activated the transcription of NF-κB. Among the seven potent compounds from the secondary screening, treatment with NPD5943, NPL62020-01, and NPL42040-01 reduced NF-κB activation dose-dependently. NPL82037-01 and NPD945 also caused accumulation of IκBα and p-IκBα upon TNF-α stimulation; however, NPL82037-01 caused substantial cell death at high concentrations, and NPD945 did not affect NF-κB transcription activity, so these two compounds were not regarded as effective in the NF-κB signaling pathway. GS143 at 20 μM inhibited the transcription to ~40% of that of the control group (DMSO, TNF-α), whereas NPD5943 and NPL62020-01 inhibited it to approximately 20% of the control at 20 μM, showing better inhibition than GS143 at high concentration (Fig. 4B).
The compounds that exhibited an inhibitory effect on either the β-catenin or the NF-κB pathway thereby demonstrated their ability to bind the β-TrCP E3 ligase and can be considered hit ligands. Therefore, although NPD5943, NPL42040-01, and NPL62020-01 did not have noticeable effects on the β-catenin pathway, we still regarded them as positive ligands because of their significant effects on the NF-κB pathway. In contrast, NPL72038-01 and NPL62039-01 did not exhibit a significant effect in either the β-catenin or the NF-κB signaling pathway; hence, we did not regard them as hit compounds.

In silico analysis of the hit compounds

The structures of the three hit compounds were further analyzed in silico to examine the important moieties and explain their activity. Molecular docking and binding-site analysis of β-TrCP (PDB: 1P22) were carried out with the hit compounds using the docking program AutoDock Vina and PyMOL. Docking runs were set up using AutoDock Tools, and the meta information for each pose, including the docking score, is recorded in an automatically generated text file. Reportedly, β-TrCP recognizes a specific motif with phosphorylated serines [7]. The interface indicates hydrogen-bond (H-bond) binding to the phosphorylated sites. The amino acids bound to pS33 (Y271, R285, S309, and S325) are marked in a yellow circle, and the amino acids bound to pS37 (S448, R431, and G432) are marked in a white circle (Fig. 5A). We focused on these amino acids when assessing H-bond formation with the compounds. The results show that GS143 created three H bonds with β-TrCP. One of the H bonds created by its carboxyl group involves the same amino acid that binds β-catenin pS33 (Y271), indicating the importance of the carboxyl group. By comparing the structures of the three hit compounds, we noticed that the carboxyl groups of the two GS143 derivatives and the hydroxyl group of NPD5943 created H bonds when bound to β-TrCP. The two active GS143 derivatives created more H bonds than GS143, and they share binding residues with both pS33 and pS37 (NPL42040-01: Y271, S325, S448, R431; NPL62020-01: Y271, S325, S448). Among them, NPL42040-01 was the most effective in the immunoblotting results; it created a total of six H bonds, and four of its binding residues belong to the pS33- and pS37-binding sets (Fig. 5B). To further explore the importance of the carboxyl groups, we analyzed another negative ligand (NPL72029-01) that lacks them. NPL72029-01 did not create any H bonds to β-TrCP, which indicates the importance of the carboxyl group for β-TrCP binding.
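The residue-level comparison above can be approximated computationally. The following sketch is only a crude distance-based proxy for such H-bond counting (it ignores bond angles and protonation states), and the input file name and ligand residue name are hypothetical placeholders, not files from the study.

```python
# Crude geometric screen for polar contacts between a docked ligand and the beta-TrCP
# residues discussed above (Y271, R285, S309, S325, R431, G432, S448). A distance cutoff
# is only a rough proxy for hydrogen bonds (no donor/acceptor angles or protonation states).
# The input file and the ligand residue name "LIG" are hypothetical placeholders.
import math

KEY_RESIDUES = {271, 285, 309, 325, 431, 432, 448}
CUTOFF = 3.5  # Angstrom heavy-atom distance commonly used for putative H bonds

def polar_atoms(pdb_path):
    """Yield (record, resname, resnum, atom_name, x, y, z) for N/O atoms in a PDB file."""
    with open(pdb_path) as fh:
        for line in fh:
            if not line.startswith(("ATOM", "HETATM")):
                continue
            elem = line[76:78].strip()  # assumes element symbols in columns 77-78
            if elem not in ("N", "O"):
                continue
            yield (line[:6].strip(), line[17:20].strip(), int(line[22:26]),
                   line[12:16].strip(),
                   float(line[30:38]), float(line[38:46]), float(line[46:54]))

def polar_contacts(pdb_path, ligand_resname="LIG"):
    """Return (residue number, protein atom, ligand atom) pairs within the cutoff."""
    protein, ligand = [], []
    for rec, resname, resnum, atom, x, y, z in polar_atoms(pdb_path):
        if resname == ligand_resname:
            ligand.append((atom, (x, y, z)))
        elif rec == "ATOM" and resnum in KEY_RESIDUES:
            protein.append((resnum, atom, (x, y, z)))
    return [(resnum, patom, latom)
            for resnum, patom, pxyz in protein
            for latom, lxyz in ligand
            if math.dist(pxyz, lxyz) <= CUTOFF]

# Hypothetical usage with a receptor-ligand complex written out after docking:
# for resnum, patom, latom in polar_contacts("complex_pose1.pdb"):
#     print(f"residue {resnum} ({patom}) -- ligand {latom}")
```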
Discussion

A significant amount of SCF β-TrCP is available in cells, showing enormous potential as a target of PROTACs [31]. β-TrCP can potentially be a competitive target for the PROTAC technique, and a sensitive and rapid screening system may provide more possibilities for PROTAC synthesis. In this study, we established a high-throughput screening system by constructing recombinant baculoviruses to express the fluorescent fusion protein mAG-β-TrCP1, and we used it to identify ligands of β-TrCP. The high-throughput screening revealed 36 compounds that showed competitive binding activity to β-TrCP. These compounds were re-examined with the in vitro ubiquitination assay, and seven were obtained as hit compounds. The effect of the hit compounds on the degradation of the substrates β-catenin and IκBα was further examined to reflect their competitive binding activity to β-TrCP.

Results revealed that a majority of the hit compounds caused marked intracellular accumulation of IκBα and its phosphorylated form, similar to the effect of the proteasome inhibitor MG132. Three of the hit compounds showed a dose-dependent inhibitory effect on TNF-α-induced NF-κB transcription, which demonstrated their effectiveness in the NF-κB signaling pathway. As GS143 is a specific inhibitor of β-TrCP [8], NPL62020-01 and NPL42040-01 were also predicted to be specific to β-TrCP. As for NPD5943, we examined its binding to another F-box protein of the SCF-type E3 ligase, SKP2; as expected, NPD5943 did not bind to SKP2 at all (data not shown). In addition, because β-TrCP binds to phosphorylated peptides, it is possible that its ligands also bind other proteins that recognize phosphorylated proteins; the polo-box domain of Plk1, 14-3-3 proteins, and Pin1 are examples of such proteins. When we examined the binding of NPD5943 to such proteins, we did not observe any binding (data not shown). These results suggest that the binding of NPD5943 to β-TrCP is highly specific. We also examined the non-specific cytotoxicity of NPD5943, NPL62020-01, and NPL42040-01 and found that these compounds showed cytotoxicity only at high concentrations (data not shown). As inhibition of β-TrCP by siRNA did not show any cytotoxic effects on HeLa cells, these results also support the view that these hit compounds do not non-specifically inhibit other proteins.

A majority of the hit compounds that affected the degradation of IκBα did not cause the accumulation of β-catenin, even though its E3 ligase is the same SCF β-TrCP1 that targets IκBα. Moreover, the two hit compounds that inhibited p-β-catenin degradation did not increase β-catenin itself, indicating that only a small population of β-catenin was increased by these compounds; we speculate that this increase may not be sufficient to promote TCF transcription. Thus, we could not find compounds that inhibited β-catenin degradation. Corroborating the findings of a previous study, GS143 showed similar behavior, specifically targeting IκBα degradation but not β-catenin [8]. GS143 was identified by examining IκBα ubiquitination using TR-FRET, and its effect is more likely due to disruption of the physical interaction between p-IκBα and SCF β-TrCP1, which inhibits IκBα ubiquitination [8]. In this study, by contrast, the compounds were screened using binding and in vitro ubiquitination assays, which confirmed their competitive binding activity to β-TrCP. We hypothesize that such differences may arise because their chemical moieties result in different binding characteristics. GS143 has been reported as a novel NF-κB signaling inhibitor with great potential in the treatment of airway inflammation in asthmatic patients [34]. Here, we found more potent compounds that function better than GS143 in inhibiting TNF-α-induced IκBα degradation and NF-κB activation. The hit compounds obtained here showed a potent effect on NF-κB signaling and did not increase the level of β-catenin, so they can be candidate compounds with anti-inflammatory and anti-tumor properties; this possibility can be explored in future studies.

In silico analysis summarized the important moieties and explained why the three hit compounds are active. According to the results, when GS143 binds to β-TrCP, the carboxyl moiety of GS143 forms two hydrogen bonds, one of which occupies the same β-TrCP amino acid that binds β-catenin pS33.
The carboxyl moieties of the two GS143 derivatives also created hydrogen bonds when bound to β-TrCP, while the negative compound without carboxyl groups did not. These results may explain the importance of the carboxyl group in these GS143 derivatives. The hydroxyl groups of NPD5943 also play an important role in H-bond creation. These groups should be protected in the synthesis of PROTACs. Both active GS143 derivatives have binding residues that belong to the pS33 and pS37 sites. We presume that GS143 derivatives occupying more of the pS33- and pS37-binding amino acids through H bonds may show higher activity in β-TrCP binding.

In summary, we developed a flexible, sensitive, and specific system that can be used to screen for β-TrCP ligands. Potent ligands for β-TrCP were obtained as expected and can be used as tools for PROTAC construction in the future. PROTAC technology has significant advantages over conventional drugs and has shown great potential as a therapeutic and biological tool. This study provides a new prospect for target E3 ligase selection and promising binding ligands for PROTAC construction. The potent ligands can be used as the element of the PROTAC complex that recruits β-TrCP to the protein of interest, which may prove of great value in clinical therapy in the future.

FIGURE 5. In silico analysis of the hit compounds. (A) The interface of phosphorylated β-catenin (pS33, pS37) and β-TrCP: β-TrCP is shown in red, with its side chains in pink, and the phosphodegron peptides of β-catenin are in yellow (phosphate groups on the serines are shown in green and red). White dotted lines represent H bonds. Amino acid residues linked to pS33 of β-catenin and those linked to pS37 are marked in yellow and white circles, respectively. The phosphoamino acids of β-catenin, pS33 and pS37, are marked in yellow and white boxes, respectively. The original figure from [7] was modified. (B) The interface of the hit compounds and β-TrCP: β-TrCP is shown in light blue. When the amino acids shown in (A) create an H bond with the compound, they are shown in the same color as in (A). (C) The interface of the negative compound (NPL72029-01) and β-TrCP: β-TrCP is shown in light blue. The structure of NPL72029-01 is shown on the right.
4,942
2023-07-21T00:00:00.000
[ "Chemistry", "Medicine" ]
Kierkegaard on Hope and Faith : Faith, hope and love have often been classed together in the Christian tradition as the three “theological virtues”. Kierkegaard does not use that label for them, but he does have a good deal to say about all three. This paper starts by examining hope, arguing that there is an Aristotelian-style virtue relating to hope (a mean between wishful and depressive thinking) and that Kierkegaard could consistently recognize it as a secular virtue. However, his main discussions of hope as a positive state are in a religious context and relate it closely to faith and love; proper hope is a work of love and grounded in faith in God. I then argue that Kierkegaard’s understanding of faith, hope and love is, in many respects, close to Aquinas’ understanding of them as theological virtues (which differs in important ways from Aristotle’s account of a virtue) and that, therefore, it is appropriate to see Kierkegaard’s religious thought as lying within the tradition of virtue theory. The main difference between Aquinas and Kierkegaard here is that the former has an intellectualist and propositional account of faith which contrasts with the latter’s affective and existential view of it. This means that hope and love are both closer to faith for Kierkegaard than they are for Aquinas, meaning that he has a tight account of the unity of the theological virtues. I conclude by discussing how both faith and hope operate as antidotes to despair in The Sickness Unto Death. Faith, hope and love have commonly been classed together in the Christian tradition as the three "theological virtues", this classification being mainly based on the authority of St Paul: "And now faith, hope and love abide, these three, and the greatest of them is love". 1 Kierkegaard has a good deal to say about all three as crucial elements in the Christian life.In this paper, I will look at his account of hope and also at its relation to faith-though, it will turn out that he sees both as intimately related to love also.The question of how hope and faith relate for Kierkegaard is posed in an interesting way by The Sickness Unto Death. 2 This is a book about despair, and it might intuitively seem natural to think of hope as the opposite of despair.And Mark Bernier has indeed argued that, for Kierkegaard, "despair is essentially an unwillingness to hope" (Bernier 2015, p. 81).But, in The Sickness Unto Death, Kierkegaard says relatively little explicitly about hope, while he claims that despair-considered to be an ontological state, rather than a mood-is equivalent to sin (Kierkegaard 1980a, pp. 77 and ff) and that faith is the opposite of sin and therefore of despair (Kierkegaard 1980a, p. 82).But, if despair is defined in opposition to both faith and hope, then, as Bernier says, "it is unclear precisely how faith and hope relate to one another.Is one the necessary condition for the other?Is one of them superfluous?"(Ibid, p. 80) One might add other possibilities; perhaps, for instance, they are really just the same thing, seen from different perspectives?But, in order to address the question of how faith and hope relate, we first need to consider what faith and hope are. 
Hoping Virtuously I will start with hope (and will spend most of my time on it).Hope has, until recently, been a rather neglected topic in the secondary literature on Kierkegaard (but see now, for instance, Bernier 2015; Lippitt 2015;Fremstedal 2019Fremstedal , 2020)).One might say that it has been rather neglected in the philosophical literature generally.It isn't usually treated as a virtue (or, at least, an important one) in traditions other than Christianity 3 , and, despite some recent discussion, it hasn't been considered much in contemporary secular virtue ethics (though, see, e.g., Bovens 1999;Milona 2019;Pettit 2004;Snow 2013).Whether or not hope (or hopefulness) should be considered a virtue, it is undoubtedly a psychological state, and it is important to be clear about the relationship between hope as a putative virtue and as a psychological phenomenon. 4Considered in the latter sense, it is generally agreed that hope is characteristically directed toward future events (or at least to events whose outcome is unknown to the hoper).According to what has become known as the "Standard Account" 5 , to hope for something is to desire a particular (unknown) outcome while considering it neither certain nor impossible.There is some dispute about how to spell out the second clause: is it really hope if I think the outcome overwhelmingly likely, even if not quite certain? 6It seems generally agreed that it is still hope in the opposite case, where the desired outcome is hugely improbable, but not quite impossible, although there is debate about whether this ("hope against hope" as it is rather oddly called 7 ) is either epistemically or morally legitimate. 8As for the first clause, "desire" can mean anything from a passionate craving to a slight preference for one outcome over another in a matter about which one cares very little.Some have thought that to hope for something must involve more than a mere idle wishing for it, that hope must require some sort of substantial emotional investment in or some degree of active attention to the issue.There may be no determinate answer to these questions; it seems generally agreed that both clauses of the Standard Account are necessary for hope, while it is disputed whether they are jointly sufficient.We may have to accept that hope is a family resemblance concept with vague boundaries, which may be used differently-in more minimal and more expansive senses-in different contexts. It is, however, interesting that the term "hope" can be used to express both confidence and a lack of it.Compare, "Yes, I'm very hopeful that she'll make a full recovery" with "Well, I hope she'll recover, but I don't know.We need to prepare for the worst".It is in the former, positive sense that we typically talk about someone being "hopeful", whether about a specific case or generally.Someone who has hopefulness as a general character trait does not just hope for a lot of things in the Standard Account sense. 9(I call that "hopeSA" in what follows).We all have hopes in that sense, insofar as we care at all about uncertain outcomes.Rather, the "hopeful" person is someone who tends to look on the bright side, who tends to be optimistic. 
10 In this sense, hope goes beyond the Standard Account and is an attitude which tends to expect the uncertain outcome it desires to happen; or, even when it is not directed to some particular outcome, expects things in general to work out well. Let us call hope, in this sense, "hopeE" (for expectancy); one may have hopeE for some particular case, but hopefulness as a character trait is a tendency to have hopeE as a sort of default setting as one's attitude to future outcomes. 11 Hopefulness, in this sense, may not itself be a virtue, but there does seem at least to be a virtue relating to it. According to Aristotle's account, a virtue is a disposition in respect of a state such as fear or appetite that is a mean between extremes of excess and deficiency (Aristotle 2019, pp. 26-35 (Bk II, 5-9)). And it seems that one can be both excessive and deficient in hopeE. Wishful thinking involves supposing that something unlikely is in fact likely to happen just because one wants it to. 12 On the other hand, what we might call "depressive thinking" involves supposing that nothing good could happen (or happen to me or us), even though it is quite plausible that it might. And (this is a related but not, I think, identical phenomenon) there is fearful thinking, where one is inclined to think that the bad things one can envisage are more likely to happen than they are. 13 If there are these dubious extremes, it seems there ought to be a mean between them, which would be the virtue of hoping well. Interestingly, Kierkegaard, in an early Upbuilding Discourse, "The Expectancy of Faith", considers versions of the two extremes. "The cheerful disposition that has not yet tasted life's adventures ... expects to be victorious without a struggle." (Kierkegaard 1990a, pp. 19-20). This is contrasted with the attitude of "the troubled person" who "expects no victory; he has all too sadly felt his loss." (Kierkegaard 1990a, p. 20). Kierkegaard then considers what at first appears to be the mean between them, exemplified in "[t]he man of experience [who] frowns on the behaviour of both of these." (Kierkegaard 1990a, p. 20). This character considers both attitudes to be exaggerations. Rationally, one should expect sometimes to get what one wants and sometimes not, and avoid excess of both optimism and pessimism: "in happiness one ought to be prepared to a certain degree for unhappiness, in unhappiness to a certain degree for happiness." (Kierkegaard 1990a, p. 20). This looks very sensible at first sight, but for Kierkegaard, such worldly wisdom remains unsatisfying. The young optimist might be reconciled to the idea that he may lose some things, but what about what really matters to him, "that one good that he could not lose without losing his happiness, could not lose to a certain degree without losing it totally" (Kierkegaard 1990a, p. 21)? Kierkegaard is suggesting that what lies behind this worldly wisdom is the cautious advice that, because anything might be lost, one should only care about anything "to a certain degree". One should, so to speak, diversify one's portfolio of emotional investments, so that one's losses will always be moderate and likely to get recouped by gains elsewhere. 14 Kierkegaard does not explicitly mention the response to this worldly wisdom of the "troubled person" who, we might suppose, has lost what was, for him, "that one good"; but one may suspect that he would find cold comfort in the suggestion that some other gains might compensate him, "to a certain degree", for his loss.
This analysis of the options from early in Kierkegaard's work seems, in a way, to be repeated in Sickness, where the despair of the one whose sense of possibility is unconstrained by a sense of necessity 15 is contrasted with the determinist or fatalist who is crushed by a sense of necessity which closes off all possibility (Kierkegaard 1980a, pp. 35-37; 37-42). But both are further contrasted with the "philistine-bourgeois mentality" which "thinks that it controls possibility, that it has tricked this prodigious elasticity into the trap or madhouse of probability" (Kierkegaard 1980a, p. 41). This worldly wisdom always seeks to calculate probabilities and tries to limit and adjust emotional commitments accordingly. From all of this, it might seem that Kierkegaard rejects the idea that there is an Aristotelian-style mean in respect of hope. What he goes on to recommend in the Discourse is something apparently quite different: the expectancy of faith, "which certainly does surpass even youth's most joyous hope" (Kierkegaard 1990a, p. 21) and which therefore looks to be an extreme, not a mean. But he quickly goes on to add, "even if not exactly as you suppose." (Kierkegaard 1990a, p. 21). The difference is that faith, as opposed to youthful optimism, looks to the eternal and has confidence that, by looking to the eternal, as a sailor looks to the stars from a stormy sea, it can steer a path through whatever the future may bring (Kierkegaard 1990a, p. 19). The object of faith and hope here (and they are not clearly distinguished) is not some particular, contingent, desired-for event, but simply that "the eternal" will stand by and guide the believer, no matter what happens. The youthful optimist "speak[s] of many victories, but faith expects only one" (Kierkegaard 1990a, p. 21). And since the eternal is necessary, while temporal particulars are contingent, "the person who expects something particular can be deceived in his expectancy, but this does not happen to the believer." (Kierkegaard 1990a, p. 23). We will come back to Kierkegaard's religious account of hope, which is at least intimately connected with faith. But it seems there is still more to be said at the secular level. Kierkegaard identified two unsatisfactory extremes (too much and too little hopefulnessE) and a sort of pseudo-mean. But if wishful and depressive/fearful thinking are both vices, 16 can't there be, and shouldn't there be, a proper mean, even if we stay at the secular level and don't (yet) invoke the eternal? I think we can identify and recognize such a mean, although it is hard to decide what the right name for it is. (I am using "hope" for occurrent states and "hopefulness" for a character trait, and both can be either good or bad.) The virtue concerned with hope, as we can rather awkwardly call it, involves looking clear-headedly at the likelihoods of desired outcomes and refusing to either exaggerate or minimize them, despite the temptations to do so. This virtue has especially to do with outcomes that are in part but not wholly under our control. If they are wholly under our control, we should just bring them about instead of sitting around and hoping for them.
17On the other hand, we can, of course, hope for things that are not under our control or ability to influence at all; and I think there is virtue in avoiding excessive optimism or pessimism in respect to them (a sort of emotional maturity, a sort of honesty).But one can most obviously think of the virtue relating to hope as something that enables one to flourish in cases where one is actively engaged in a pursuit with an uncertain outcome. Here, wishful and depressive/fearful thinking may both lead us astray, the former by making us oblivious to problems and dangers, and the latter by reducing our energy and motivation and making us oblivious to real opportunities.We should, however, note that this virtue is like courage in that it enables one to do better at pursuing one's ends, whatever they may be; this applies to bad ends and projects, as well as to good ones.Though I think there is truth in the doctrine of the unity of the virtues, it should not be taken to deny that there are genuinely courageous (rather than merely reckless) villains.Similarly, a villain may have the virtue in question here.But-again as with courage-that does not mean that it is not a virtue, a genuine human excellence. One might think that the virtue in question involves proportioning one's expectations of an outcome to its objective probability.This should not, however, be confused with the worldly wisdom about which Kierkegaard was so scathing.For one might still ardently desire something one recognizes to be unlikely to happen and may (where this is possible) still actively pursue it.But the person with the virtue mentioned above would do so without illusions.He or she would have hopeSA but not hopeE in such cases.(The virtuous person would sometimes have hopeE, but only when the evidence warranted such confidence.)One problem with this account is the familiar psychological point that overestimating our chances of getting something may encourage us to try harder to do so and thus actually make it more likely that we succeed. 18So, severely tying our expectations to objective probabilities may actually impede our flourishing.Another problem is that, much of the time, it is impossible to precisely calculate what the probabilities of an occurrence are anyway.(Not being a determinist, I think that they are often objectively indeterminate, so this is not just an epistemological limitation.)So, there will often be a wide range of possible expectations that cannot be shown to be objectively unreasonable.Despite this, though, there are plenty of cases where people are pretty clearly engaged in either wishful or depressive/fearful thinking, and so it does still seem reasonable to think that there is a virtue which consists in the ability to steer between those extremes.And I would suggest that we can think of it as a particular adaptation of Aristotelian phronesis, precisely because it is the ability to make the right judgement where there are (often) no exact, quantifiable rules to follow. 
I noted above that not only can one hopeE for a particular outcome, but that hopefulness can also be a general attitude to life, a positive sense of openness to the future, which expects good things (or things that one can turn to the good-it needn't be merely passive) and so approaches the future with enthusiasm rather than hesitancy or timidity.A hopeful person, in this sense, may plausibly be thought to notice opportunities or possibilities which the glum or fearful might miss, as well as having a generally buoyant, positive attitude to life.This sense is nicely captured in Emily Dickinson's lines: Hope is the thing with feathers That perches in the soul, And sings the tune without the words, And never stops at all. 19 The "tune" (the general attitude) is upbeat; circumstances may give it "words" (a particular content, a specific outcome that is hoped for), but they are taken up into a previously existing generally hopeful approach to life.There seems something attractive, desirable, about such a positive, buoyant attitude to life, although some qualifications are called for.Such an attitude would certainly need to be restrained by the realism and honesty which are needed to avoid it tempting one into wishful thinking in particular cases. There is still the phenomenon of "hope against hope" in cases where one hopes although the outcome is acknowledged as extremely unlikely.Should the virtuous person only hopeSA in such cases (that is, desire and even work for the outcome but without much expectation)?Or is hopeE (the expectation of a positive outcome) legitimate?If they are to be distinguished from mere wishful thinkers, then the putatively virtuous hopers against hope must be under no illusions about the objective improbability of the desired outcome.Nonetheless they hopeE for it.One cannot of course simply choose to believe that X will happen, even though one knows it to be objectively unlikely.But if one finds oneself with a positive feeling, intuition, expectation, despite knowing the objectively poor odds, should one just dismiss such intimations as illusory?This might seem like the case of Abraham in Fear and Trembling, who knows the improbability-even, Johannes de silentio says, the impossibility-of what he desires (that Isaac will live) but still not only desires but believes it."The knight of faith. . .acknowledges the impossibility and in the very same moment he believes the absurd, for if he wants to imagine he has faith without passionately acknowledging the impossibility. . . he is deceiving himself" (Kierkegaard 1983, p. 47).Or it might seem like the situation of the person considered in Sickness who is "brought to his extremity, where, humanly speaking, there is no possibility" (Kierkegaard 1980a, p. 38).Such a person can only be saved from despair if he can come to believe, against all rational calculation, that there is a possibility, a way out of his apparently hopeless situation."This is the battle of faith, battling, madly, if you will, for possibility, because possibility is the only salvation" (Kierkegaard 1980a, p. 38).But, as Kierkegaard is at pains to stress in both books, this is the attitude of faith and, specifically, the faith that, for God, all things are possible.To still have hope even in extreme situations, to believe that there are still possibilities, is not a baseless grasping at straws, since it rests on the belief that the empirical facts are not all that there is. 
20 Can there be a secular analogue of this attitude? It seems hard to see how there could be. There may sometimes be practical reasons for thinking it psychologically better for people in extreme circumstances to keep hope alive (see Fremstedal 2019), but can that be compatible with truthfulness? If you know that the empirical odds are all against you, and if you don't believe in anything more than the empirical, wouldn't trying to keep your hopes up amount to self-deception? 21 I think this worry applies generally, not only in extreme cases. I have talked about the desirability of a state of existential buoyancy, which is independent of specific conditions. But isn't that unreasonable, dishonest, even, unless one believes that reality is, at bottom, benign, that it gives one grounds for a generally positive attitude toward life? Aren't philosophers like Schopenhauer, Sartre and Camus, for whom the world is indifferent to human desires, merely consistent in disprizing hope? Or, even if we think they go to excess in their pessimism, shouldn't a consistent secularist at least reject hopefulness as a general attitude to life? Shouldn't one only hopeE in specific cases where the evidence gives good reasons for thinking the hope will be realized? To reiterate, such a person need not be displaying a "philistine-bourgeois mentality", for he or she may combine a deep emotional yearning for an outcome with a tough realism about its likelihood. Indeed, as John Lippitt notes, the Knight of Infinite Resignation in Fear and Trembling combines a passionate emotional commitment to the longed-for goal with a careful, hard-headed estimation of the (un)likelihood of it being realized (Lippitt 2015, p. 126). The lad who realizes that he will never marry the princess does not deceive himself with wishful thinking any more than he abandons his love. And Abraham's faith is explicitly contrasted with what Johannes de silentio treats as "the despicable hope that says: One just can't know what will happen, it could just possibly be ..." (Kierkegaard 1983, p. 37). Faith goes beyond but includes Infinite Resignation, and Infinite Resignation has already dismissed these "travesties of faith" (Kierkegaard 1983, p. 37). I would suggest, then, that Kierkegaard would not approve of those who, without faith, allow themselves to hopeE beyond what the empirical evidence permits. 22 And he would also disapprove of the person who becomes so despondent that he or she fails to see anything positive or any real possibilities. He could, thus, I think, allow for a secular virtue, which is a genuine mean between wishful and depressive thinking, and which keeps emotional commitments and vulnerabilities alive. (The merely "philistine-bourgeois" way of emotionally numbing oneself would be a sort of caricature of the true virtue, not a real mean between the extremes, but a shoddy, lukewarm compromise.) But this secular virtue is not, of course, the attitude that Kierkegaard actually commends. To his account of hope as a specifically religious virtue, I now turn.

Kierkegaard on Eternal Hope

We have already seen enough to indicate that hope in the sense in which Kierkegaard commends it is at least very closely linked to faith. In Works of Love, he has a discourse on the theme "Love Hopes All Things" (Kierkegaard 1995, pp. 246-63), and in For Self-Examination, he treats hope, faith and love as "gifts of the spirit" (Kierkegaard 1990b, pp.
81-85).In these discussions, hope is specifically Christian hope, and Kierkegaard, like Aquinas, follows the Christian tradition generally by associating it closely with both faith and love.Like Aquinas again, Kierkegaard distinguishes hope as a natural human emotion or psychological state from specifically Christian hope."In every human heart there is a spontaneous, immediate hope [which] can be more robust in one than another" (Kierkegaard 1990b, p. 82), but death sets a final limit for all such natural human hopes.Christian hope is "eternity's hope" that gives us hope even beyond the point at which the natural human understanding thinks all hope ends.Hence, Kierkegaard says, it is "hope against hope", explaining that Pauline phrase as indicating that it is a hope that defies normal human hope, characterized as that is by its limits: "the hope of the life-giving spirit is against the hope of the understanding."(Kierkegaard 1990b, p. 82-83). Hope here pertains specifically to the hope of eternal life beyond death.In Works of Love, the emphasis is somewhat different (though compatible).There, Kierkegaard distinguishes true hope from something "we often call. . .hope that is not hope at all, but a wish, a longing, a longing expectation now of one thing, now of another, in short, an expectant person's relationship to the possibility of multiplicity."(Kierkegaard 1995, p. 250).By contrast, true hope is defined as follows: "To relate oneself expectantly to the possibility of the good is to hope, which for that very reason cannot be any temporal expectancy but is an eternal hope."(Kierkegaard 1995, p. 249).Here, as in For Self-Examination and "The Expectancy of Faith", "eternal" hope is being contrasted with temporal hopes 23 , but Kierkegaard is not in fact denying that true hope has to do with temporality.He rejects the idea that true hope is "an eternal moment", as if it "were at rest, in repose" (Kierkegaard 1995, p. 249).Rather, "to hope is composed of the eternal and the temporal" and "when the eternal is in the temporal, it is in the future."(Kierkegaard 1995, p. 249).According to Sickness Unto Death, a human being is a synthesis of the eternal and the temporal, 24 and for a temporal being, the relation to the good is a continuing task and therefore one which always has the future in view.I cannot avoid the moral challenges that tomorrow may bring either by looking back on my past accomplishments or by resting in a quasi-mystical eternal present.However, we are eternal, as well as temporal, and so too is authentic hope.Worldly "hope" is directed to possibilities that may or may not be realized; true hope, by contrast, "can never be deceived, because to hope is to expect the possibility of the good, but the possibility of the good is the eternal."(Kierkegaard 1995, p. 
250).Here, "the eternal" (or simply, God) is the unchanging basis for the possibility of the good being realized in the temporal world-for instance, in my life or yours.For Kierkegaard, it is essential that it is love that hopes all things, which is to say, hopes in the properly Christian sense; only one who loves in the proper way can hope in the proper way.And what the properly loving person hopes for is the realization of the good in all persons.That includes me, of course; and this (aiming at the realization of the good in me) is what proper self-love consists in.What is hoped for is each person turning to the good, relating properly to the eternal and thus achieving his or her telos.And because God is unchangeable and beneficent, there is a firm ground for hope in the realization of the good in this sense. One might wonder whether, strictly speaking, it makes sense to say one should have hope in God.If one person says to another, "I hope you'll do the right thing", that would normally be taken to imply some doubt in the matter; similarly, it might seem, with "I hope God will be gracious to me".For Kierkegaard, God's grace is indeed constant and universal, not arbitrary or changeable.So, the hope is not that God will be gracious (we do not need to hope for that) but that we will realize our telos through responding to that grace.Faith in God, we might say, is the ground for hope for humans.Kierkegaard firmly insists that we should have hope for every person to realize the good, and never give up hope for anyone.The loving person "lovingly hopes that at every moment there is possibility, the possibility for the good for the other person."(Kierkegaard 1995, p. 253).This goes beyond the most basic kind of hopeSA (desire + uncertainty) in that it involves a strong emotional investment and concern, but also in that it is hopeful-trusting that, because of God's grace, and because of the ineradicable longing for the good that exists in all of us 25 , salvation remains a real possibility even for those who seem the worst of people. 26This is true even when the person who seems the worst to me is myself; it is a crucial aspect of the virtue of hope that we should never lose hope for our own salvation.Hence, hope, in this sense-for others and for myself-is a form of hopeE. For Kierkegaard, hope for oneself and hope for others are inseparable since they are both expressions of love."Earthly understanding thinks that one can very well hope for oneself without hoping for others and that one does not need love in order to hope for oneself".But, on the contrary, "without love [there is] no hope for oneself; with love [there is] hope for all others-and to the same degree that one hopes for oneself, to the same degree one hopes for others, since to the same degree one is loving."(Kierkegaard 1995, p. 260).Proper hope for oneself is hope for one's salvation, for one's coming to rest transparently in God.Selfish hope for oneself (like Cesare Borgia's hope to become "Caesar" by gaining military and political power in Renaissance Italy-see Kierkegaard 1980a, p.19) wasn't real hope for himself (or an expression of real self-love), as it didn't pertain to the realization of his telos of a proper God-relationship.When we are commanded to love our neighbours as ourselves, it is implied that we need to love ourselves in the right way-that is, to desire and work for the realization of the good in us, for a proper relationship to God (Kierkegaard 1995, pp. 
22-23). Proper self-love is thus inseparable from the love of God (without which I cannot fully be a self 27), and it is inseparable from the love of other people, for to love God and to love oneself aright is to love all creation and desire it to reach its telos. Hope, when considered in this sense, is the basis for what I called above a general attitude of "existential buoyancy", a generally hopeful attitude toward the future. The faith that "for God all things are possible" (Kierkegaard 1980a, p. 38) should not be confused with the naïve providentialism which thinks of God "like the fond father who indulges the child's every wish" (Kierkegaard 1980a, p. 78) so that, with God's help, I can count on things always turning out the way I would like. (Wishful thinking doesn't become any more virtuous for being linked to an immature religiosity.) The hope is, rather, that whatever particular circumstances I meet with, desired or otherwise, can become occasions for strengthening my selfhood and my relation to God. What, though, is the relation between this religious hope and the particular hopes we do of course have for particular worldly goods? 28 Though it isn't hard to find in some of Kierkegaard's rhetoric a tone of pietistic disdain for worldly goods, it wasn't until the end of his life that this attitude took him over. Works of Love does not, when properly understood, reject friendship, romantic love or other "preferential" loves; 29 similarly, his insistence that true hope is for people to attain their telos (the proper God-relationship and, ultimately, eternal life with God) does not mean that Kierkegaard thinks that there is anything wrong with hoping for particular worldly goods for ourselves or for others. It would indeed be unloving not to do so; and his praise of those who love without being able to provide any practical assistance does not mean that Kierkegaard thought that such assistance, when it was practicable, was unimportant. 30 Still, for Kierkegaard, the most important thing for anybody is for that person to overcome despair by becoming fully a self through relating properly to God. And so, that is what someone who loves that person (whether the person is myself or another) should primarily hope for for the person in question. Whether, apart from that, a person achieves some goal that he or she supposes will make him or her happy is a secondary, which is not to say an unimportant, matter. And, from Kierkegaard's perspective (or Aquinas' or Aristotle's), one can only know what will really benefit someone (and therefore what one should hope for for them) if one understands what their ultimate telos is. Becoming "Caesar" is not what someone who really loves Cesare Borgia should hope for for him. Mark Bernier argues that Kierkegaard's view is that "For authentic hope to become realizable, earthly hope needs to die." (Bernier 2015, p.
115). However, Bernier claims that this is not as other-worldly or ascetic as it sounds. What is meant by earthly hope here is a sort of higher-order hope that getting the particular temporal goods one wants will bring fulfilment and will unify the self. This, for Kierkegaard, is an illusion. But this is not to say either that our particular worldly hopes will fail, or that they may not be proper in their own right, or that they will not bring us any satisfaction if they are realized. (So, it is not a Schopenhauerian pessimism according to which hopes as such are delusive.) But insofar as our particular hopes are legitimate, they need to be reconfigured in a different outlook from that of "earthly hope"; they need to be oriented by eternal hope. And, according to Bernier, eternal hope, the hope for the good, is also a higher-order attitude, since what it is directed toward, "the possibility of the good ... is something of a higher-order possibility that manifests within temporality through other lower order (mundane) possibilities." (Bernier 2015, p. 141). To hope for the good is not to hope for one more thing, like a promotion at work or a new car; it is an overall orientation of one's life which gives a criterion for which particular hopes are legitimate and which are not, for how intently I should pursue them, for how I should respond if they are disappointed, etc.

Kierkegaard and Aquinas on the Theological Virtues

Is hope (in the sense outlined above) a virtue for Kierkegaard? Although he does not use the language of virtues much, I think Kierkegaard is, in effect, a virtue theorist. 31 For Aristotle, a virtue is a good character state; it is a disposition, so it is a stable and enduring state, not a passing mood or emotion. It reliably produces good actions in the relevant circumstances. It is a state that is necessary for persons to achieve their goals (their non-trivial ones, at least), and, more fundamentally, it is necessary for persons to reach their telos: to achieve eudaimonia (flourishing) through the realization of their distinctively human potentialities. A virtue is a human excellence, and, as such, it is necessary for eudaimonia, not as a means to a distinct end, but as a constituent of it. To properly develop the virtues is itself part of what it is to flourish as a human being (see Aristotle 2019, pp. 1-47 (Bks I.1-III.5)). Hope for Kierkegaard is certainly a stable disposition (as are the love and faith which are inseparable from it); the loving person hopes all things and hopes always (Kierkegaard 1995, p. 248). Hope expresses itself concretely in our temporal lives; not always in overt actions (Kierkegaard 1995, p. 258) but certainly in those where possible and appropriate. Hope plays an integral role in attaining the human telos (though, of course, Kierkegaard understands what that is very differently from Aristotle). And it isn't simply a means to a distinct end; hope is a necessary expression of love, and to love properly is our telos.
Despite these structural similarities, hope, faith and love for Kierkegaard are distinctly unlike Aristotelian virtues in several respects.(And, of course, Aristotle himself did not consider them to be virtues.)Firstly, faith, hope and love all characterize the self as it stands in a specific relationship (to God) rather than simply being dispositions internal to the subject, as e.g., temperance and courage are.One might think of justice in Aristotle as an essentially relational virtue, but it is the disposition to behave justly to those with whom one interacts, whoever they are; it is not essentially about the relationship to a particular other.(Of course, love and hope are virtues that we are supposed to exercise in dealing with our "neighbours"-that is, with anybody we encounter; but they are grounded in and defined by our relationship with the one specific "Other"-God. 32) Secondly, Kierkegaard clearly does not think of them as means, relative to vices of excess and deficiency.One can never have too much faith, too much love or too much hope.(Even if one's hope for the salvation of a depraved person was disappointed and that person was "eternally lost" (Kierkegaard 1995, p. 262) that would not, Kierkegaard insists, mean that the hope in question had been "put to shame", shown to have been excessive.We should always keep hoping and hope for everybody (Kierkegaard 1995, pp. 260-63).)Finally, the virtues, according to Aristotle, are developed in us through good upbringing, although we also have the responsibility to further train ourselves in them through performing the actions that a virtuous person would do until they become habitual to us (Aristotle 2019, pp. 21-26 (Bk II, 1-4)).But for Kierkegaard, as we have seen, faith, hope and love are "gifts of the spirit"-which does not, however, mean that we don't need to work at developing them. So, hope, faith and love are not strictly Aristotelian virtues.But that doesn't mean that we should not think of them as virtues at all.Aquinas gives a broadly Aristotelian account of what he calls "natural" virtues, but he gives a significantly different account of faith, hope and love (charity-caritas) as "supernatural" or "theological" virtues.They are still, Aquinas insists, virtues, since they are habits or dispositions that are necessary to our achieving the happiness or beatitude that is our telos (ST, I-II, LXII, 1).But, in addition to the "natural" happiness that can be comprehended by reason alone, we are intended by God for a further and higher happiness, which is the vision and love of God (ST I-II, LXII, 1) The theological virtues differ in kind from the moral and intellectual virtues, since those have objects that can be comprehended by human reason, while "The object of the theological virtues is God Himself" (ST I-II, LXII, 2).They are thus, as noted above, intrinsically relational states and not just relational in general; instead, they are principles of relation specifically to God.The theological virtues are not defined as means between opposing vices in the Aristotelian sense: "we can never love God as much as he should be loved, nor believe or hope in Him as much as we should" (ST I-II, LXIV, 4). 33However, Aquinas does allow that, "in an accidental way and on our part, a mean and extreme can be found in theological virtues" (ST I-II, LXIV, 4).For instance, "Hope is a mean between presumption and despair when considered in relation to us. . 
.But there cannot be too much hope in relation to God, whose goodness is unlimited" (ST I-II, LXIV, 4 ad 3).Furthermore, unlike the moral and intellectual virtues which are developed by habituation, the theological virtues "cannot be caused by human acts whose principle is reason, but only by divine operation in us" (ST I-II, LXIII, 2).So, in these three ways of differentiating faith, hope and love from Aristotelian virtues, Kierkegaard and Aquinas seem to be in agreement. Aquinas states that "man's happiness or beatitude is of two kinds. . .One kind is proportioned to human nature, which man can arrive at by the principles of his nature.The other kind is a happiness surpassing man's nature, which man can arrive at only by the power of God" (ST I-II, LXII, 1) This might seem to suggest a "two-level" account of the natural and the supernatural-the idea that our nature is complete as it is, and a "supernatural" destiny is gratuitously added onto it by God.However, I think this impression is, overall, misleading.Aquinas does not believe that the "natural" ethical life is satisfactory by itself or that it can bring real happiness.On the contrary, he argues that no purely finite goods can really satisfy us: "nothing can bring the will of man to rest except the universal good.This is not found in any created thing but only in God. ..Hence only God can satisfy the will of man" (ST I-II, II, 8).Furthermore, Aquinas argues that even the moral virtues cannot be had in full measure unless they are directed by love/charity, for that is what inclines us to our ultimate end, and it is only on that basis that I can really exercise prudence (Aquinas' equivalent of Aristotle's phronesis), which is necessary for all the other virtues (ST, I-II, LXV, 2).Kierkegaard's sharp (at times) distinction between the ethical and the religious might tempt us to interpret him as having a two-level view.But, for him, the ethical is in the end unstable and unable to stand alone apart from the religious.Indeed, one could say that the whole of Kierkegaard's pseudonymous literature dealing with the aesthetic, ethical and religious stages of life could be regarded as a commentary on the passage from Aquinas just quoted above (or on Augustine's famous "You [God] have made us for yourself and our heart is restless until it rests in you" (Augustine 1991, p. 3), which was, no doubt, in Aquinas' mind also.).So far, then, it seems that what we have seen of Kierkegaard's and Aquinas' understandings of hope, faith and love are compatible; and so, it seems reasonable to say that, in Kierkegaard's understanding of them, they are (theological) virtues, albeit distinct from Aristotelian virtues in the ways we have seen. Both Aquinas and Kierkegaard see hope, faith and love as very closely connected.However, if we compare their accounts of the interrelation of the theological virtues, some significant divergences between them start to emerge.According to Aquinas, hope and charity are virtues of the will, while faith is a virtue of the intellect (see ST, I-II, LXII, 3)."Faith is a habit of the mind. 
..making the intellect assent to what is non-apparent" (ST, II-II, IV, 1).The will plays an important role, since it is the will, rather than reason, that moves the intellect to assent, but the assent itself remains an act of the intellect (ST, II-II, II, 1 ad 3; II-II, IV, 2), and its objects are propositions.Although the object of faith (God) is simple, we know even simple things (at least in this life) only discursively, so via propositions (ST, II-II, IV, 2).However, faith becomes "lifeless", i.e., mere propositional assent, if it is without charity (ST, II-II, IV, 4).Since faith is essentially "a perfection of the intellect" (ST, II-II, IV, 4), faith that becomes lifeless remains the same habit as the formerly living faith (it has not changed its essence), but, severed from its connection with charity, it ceases to be a virtue.To be a virtue, faith, must involve both the intellect firmly holding on to the truth and the will being directed by the love of God to maintain that faith and live it (ST, II-II, IV, 5). Faith has a certain priority over the others since one needs faith in God in order to love Him and hope from Him. "For we cannot tend to something by appetitive movement, whether by hope or love unless it is apprehended by sense or intellect.Now the intellect apprehends by faith what we hope for and love.Hence in the order of generation, faith must precede hope and charity" (ST, I-II, LXII, 4).Since Aquinas also says that the habits of faith, hope and charity are "infused simultaneously" (ST, I-II, LXII, 4), this cannot be a temporal priority; the point is that faith is conceptually a necessary condition for the others.But in "the order of perfection" charity comes first since (as we have just seen with faith) the others "are formed by charity and thus acquire the perfection of virtue" (ST, I-II, LXII, 4).Charity is more fundamental than the others because it is the attitude that seeks or desires God for His own sake, while faith and hope are attitudes to God as providing benefits-truth, in the case of faith; and happiness, in the case of hope (ST, II-II, XVII, 6).So, faith and hope (as we have just seen in the case of faith) can exist without charity, but without it, they are not really virtues (ST, I-II, LXV, 4).Faith and hope have a certain imperfection, since faith believes what it does not see and hope desires what it does not have (ST, I-II, LXII, 3, ad 2).Charity, in this life, needs faith and hope since we do not yet enjoy union with God (ST, I-II, LXV, 5).But there will be no place for faith or hope in heaven (ST, I-II, LXVII, 3-5), while charity will remain there (ST, I-II, LXVII, 6). 
Hope is a fixed disposition to trust in God for one's future good and, specifically, for one's eternal happiness (i.e., the enjoyment of God) (ST II-II, XVII, 2).Hope depends on faith, since I cannot hope for anything till I know what it is I am hoping for (ST II-II, XVII, 7).And, in a certain sense, hope is prior to charity; I start by obeying God through hope of reward (and fear of punishment), but this self-interested "love" of God turns into charity (pure, disinterested love of God) as I develop spiritually.Thus, "hope leads to charity" (ST II-II, XVII, 8), but in this life, it is not simply replaced by it.The desire is purified, but since what is desired (union with God) still lies ahead, hope remains the appropriate attitude.But, as with faith, it is by becoming infused with charity that hope becomes a real virtue.Interestingly, Aquinas distinguishes hope from love by arguing that the former "regards directly one's own good and not that which pertains to another" (ST II-II, XVII, 3).This appears to be a significant difference between his view and Kierkegaard's insistence that hope for oneself and hope for others are inseparable.However, Aquinas continues: "Yet if we presuppose the union of love with another, a man can hope for and desire something for another man as for himself" including the other's eternal salvation.And "it is the same virtue of hope, whereby a man hopes for himself and for another" (II-II, XVII, 3).This makes the difference between Kierkegaard and Aquinas less stark than it initially appears to be, but there does still seem to be a difference insofar as Aquinas sees hope as being primarily and essentially for oneself. The most obvious contrast between Aquinas' views and Kierkegaard's, though, is in their differing views of faith.Although for Aquinas, merely propositional faith is not a virtue, faith is essentially propositional, and this propositional assent is a precondition of the charity which is needed to turn faith into a genuine virtue.For Kierkegaard, throughout his work, faith is primarily a kind of trust.This has been emphasized by John Davenport, who argues that "[a]s a kind of trust, faith is a practical rather than merely doxastic attitude; the agent does not merely assert the ideas. ..as propositions but stakes the meaning of his life on them". 34Of course, faith for Kierkegaard has content (see, Kierkegaard 1992, I, p. 380) and can even be expressed propositionally ("God is love", "Jesus Christ was both human and divine", etc.).But, according to Johannes Climacus (I am assuming throughout this discussion that Climacus can here be taken as speaking for Kierkegaard), the distinctive claims of Christianity (as distinct from the generic "Religiousness A") are paradoxical, so that the propositions in which they are stated are utterly repellent to the objective intellect.It is not simply that we should (as Aquinas insists) respond to the doctrine of the Incarnation with life-changing love, rather than dispassionate intellectual comprehension; rather, the Incarnation is impossible for us to think through on the intellectual plane; there, we can simply do nothing with it.The only way to grasp it is existentially.This is why "Christianity is not a doctrine but an existence-communication" (Kierkegaard 1992, I, p. 
570).The impossibility of thinking through the paradox pushes us away from seeking to understand it (which involves holding it at an intellectualizing distance) and forces us instead to live in the light of it: Suppose that Christianity does not at all want to be understood; suppose that, in order to express this and to prevent anyone, misguided, from taking the road of objectivity, it has proclaimed itself to be the paradox.Suppose that it wants to be only for existing persons, and essentially in inwardness. ..which cannot be expressed more definitely than this: it is the absurd, adhered to firmly with the passion of the infinite.(Kierkegaard 1992, I, p. 214) This still leaves more generic religious claims such as the existence of God (considered abstractly or generally, and not in a specifically Christian sense, as Trinity) and immortality.Aquinas does not consider such beliefs to be the province of faith since he thinks that they can be proved by natural reason.Climacus does not think that the belief in God or immortality is paradoxical, "absurd", as the specifically Christian beliefs are, but he also doesn't think they can be proved; indeed, the very project of attempting to do so is, he thinks, misconceived in principle.Trying to prove immortality is like trying to paint Mars wearing the armour that renders him invisible (Kierkegaard 1992, I, p. 174).Immortality does not, as such, conflict with reason (as the idea of the "god in time" does), but it can still only be grasped first-personally as the prospect (and challenge) that I will live forever.To slip into the mode of asking whether some objectively specified entity called "the soul" can exist apart from the body is to abandon the first-personal stance from which the question of immortality makes sense.Similarly with the existence of God.Kierkegaard is clear that we all have a natural innate sense of, orientation to and need for the eternal, the good-in a word, for God.But to move from this sense of my own existence as bound up with God's in order to adopt the sideways-on stance of objective metaphysical reasoning will not only lead to no decisive conclusion, but will take me away from the knowledge of God that I have in my subjectivity to contemplate a conceptual idol (As Martin Buber would later argue, God can only be apprehended in the mode of I-Thou; God is the only Thou that can never be apprehended as an It. 35). 
For Aquinas, faith is a virtue of the intellect, and hope a virtue of the will.Kierkegaard makes no use of such a faculty psychology 36 , and his existential rather than intellectual understanding of faith reduces the contrast that Aquinas makes between hope and faith.However, it can still be said that faith is a precondition of hope for Kierkegaard, as for Aquinas, albeit in a rather different way.What Kierkegaard calls "the expectancy of faith" in the Discourse of that name seems hard to distinguish from "true hope" in Works of Love; but faith's expectancy (true hope) is an aspect or an expression of faith itself-it is the attitude that one who has faith in God takes to uncertain future events.In Works of Love, it is faith in God's goodness that is the ground for hope for all persons.According to Gene Fendt, Kierkegaard's view is that Insofar as faith believes God, believes in God and believes that God makes good, it is distinct from hope which is an expectation of the good for both oneself and one's neighbor.But insofar as faith believes that God makes good, it is inseparable from the hope which expects the good for both oneself and one's neighbor.If the first (faith) is given up, then the second (hope for both oneself and others) is ipso facto given up (Fendt 1990, p. 168). I think Fendt is basically right: faith for Kierkegaard is the fundamental trust in God that grounds our hope for ourselves and for others.Hence, faith is a precondition for hope. 37But it is a matter of one affective state following on another, rather than an affective state (hope) following on an intellectual one (faith).Kierkegaard's rejection of the idea of a purely intellectual faith also means that faith (trust) in God is for him an aspect of love for God, rather than an intellectual preliminary to it Faith, as a trust in God is rooted in love for God, and love/faith, in this sense, is the ground for hope for all persons.It is also the ground for the love of one's neighbour (i.e., any and every human being), which is why that is a universal and obligatory, rather than a particular and preferential, love. 38 Kierkegaard thus has a fairly tight account of the unity of the theological virtues. Hope, Faith and Despair I started this essay with the puzzle that, while Sickness Unto Death presents faith as the opposite of despair/sin, it might seem more intuitive to oppose hope to despair in this way.What is (psychologically) called "despair" and (theologically) "sin" is basically an ontological state in which the self fails or refuses to synthesize itself by failing or refusing to relate properly to God.Particular morally wrong acts emerge from this state of psychical or spiritual disharmony (Kierkegaard 1980a, pp. 81-82).The opposite state, "when despair is completely rooted out", is that in which, "in relating itself to itself and in willing to be itself, the self rests transparently in the power that established it."(Kierkegaard 1980a, p. 14).Kierkegaard's concern to understand sin and its opposite as ontological states of the self, rather than simply in terms of good or bad actions, is certainly something that connects him to virtue theory in general.However, as the argument of Sickness continues, Kierkegaard insists that "the opposite of sin is not virtue but faith" (Kierkegaard 1980a, p. 82), and the formula for the state in which despair is rooted out is now explicitly presented as the definition of faith: "Faith is that the self in being itself and in willing to be itself rests transparently in God." 
(Kierkegaard 1980a, p. 82). 39Kierkegaard's point here is that both the problem and the solution have to be understood in religious, not in merely ethical, terms.I think that we can take "virtue" here to refer to an ethical outlook based on human self-sufficiency, along the lines of the "first ethics" mentioned in The Concept of Anxiety which was "shipwrecked" on the realization of human sinfulness (Kierkegaard 1980b, pp. 17, 20).It attempts, with only limited success, to treat the symptoms of a sickness which it does not really understand.Here again, Kierkegaard is pretty much in agreement with Aquinas, for whom an ethics based purely on the "natural virtues" can recognize neither the depths of our problem (sin) nor the height of our ultimate telos (eternal happiness).This is why, as I noted above, Aquinas doesn't really have a "two-level" theory; for him, even the natural virtues only become fully virtues once they are informed by the theological virtues-primarily, by charity (ST, I-II, LXV, 2). For Aquinas, charity remains in the blessed in heaven, while faith and hope do not.It is interesting that, by contrast, Kierkegaard uses "faith" for the state in which we realize our telos by "resting transparently" in God.But, as noted above, faith in and love for God are so closely interconnected for Kierkegaard that "faith" here must include love also.(The right relation to God is one of love.)But it is curious that Kierkegaard's definition makes faith the achieved state in which "despair is completely rooted out."(Kierkegaard 1980a, p. 14).Faith, so understood, is a great rarity, if it even exists at all in this life.Anyone who is "not wholly [a true Christian]. . .still is to some extent in despair."(Kierkegaard 1980a, p. 22).This makes a striking contrast with Luther, for whom we have faith despite being sinners.For, if despair is sin, someone in whom despair is "completely rooted out" is no longer a sinner.Moreover, faith understood in this way seems to be different from faith in Kierkegaard's other works.The trusting faith in God discussed in "The Expectancy of Faith" and which underlies "love's hope" in Works of Love seems to be something which ordinary imperfect persons can still exhibit."Faith" in Sickness is our telos and since, as we have seen, true hope for Kierkegaard is the hope that we will reach our telos; we could say that the proper object of hope is that we will come to have faith.But, in "The Expectancy of Faith" and Works of Love, faith is the presupposition of hope.So, unless we take Sickness to be repudiating the teaching of the earlier works, we would seem to have a vicious circle.I think the problem is that Kierkegaard is using "faith" in two different senses. 40To make his overall position consistent, we need to distinguish between Faith (with an uppercase "F") as the achieved state of synthesis and faith (lowercase "f") as the stance of those struggling to reach that state."Faith" then appears as the telos of the faith of the still imperfect. 41 If Faith is the telos, then faith, hope and love-inextricably interconnected, as we have seen above-are the primary virtues which we need to struggle toward Faith, or, to put it another way, to root out the despair that lurks in all of us sinners. 
We can now, I think, see how to resolve the issue from which I started: how to understand the relation of faith and hope in Sickness.In ordinary usage, to despair is to experience all of one's present circumstances and future prospects as bleak, as offering no likelihood of one's attaining the goods for which one yearns.Hope thus looks like the opposite to or antidote for such a state.But, for Kierkegaard, despair is an ontological state of disharmony in the self, of which the self may not be aware, or only dimly aware, and it might seem much less obvious that hope as such is the opposite of despair in this sense.However, hope, as we have seen, is for Kierkegaard the hopeE that I (like everyone else) will achieve my telos of self-unification through resting in God, and this is based on faith (trust) in God.And hope, in this sense, is indeed essentially opposed to (Kierkegaardian) despair and is necessary to resist it.So, the despair of weakness (see Kierkegaard 1980b, pp. 49-67) needs to be combatted by the hopeE that one will achieve Faith.But while the weak despairer plausibly has too little hope, it would seem wrong to say that the one who despairs in defiance has too much hope.(We do not have an Aristotelian mean structure here.)It would be better to say that the actively defiant person (Kierkegaard 1980a, pp. 68-70) has the wrong kind of hope (in him-or her-self) while the passively defiant person-who has lost hope in his or her own powers but refuses any way out of despair that would require humbly relying on another (ultimately, of course, on God)-could also be said to have too little hope or, rather, to have deliberately refused hope (see Kierkegaard 1980a, pp. 70-73).So, again, to avoid despair-whether that of weakness of defiance 42 -we need to hope in the right way, that is, to hope for the achievement of the good in ourselves through trust in God.But this hope is not something that can be separated from faith or love. Funding: This research received no external funding. Conflicts of Interest: The author declares no conflict of interest.Notes 1 I Corinthians, 13: 13 (NRSV). 2 I shall throughout be treating this work as only "weakly" pseudonymous and as a reliable expression of Kierkegaard's own views. 3 There is some discussion of it in Plato's Philebus, and Aristotle considers it in relation to courage (Aristotle 2019, pp. 48-50 (Bk III.7)).I am not aware of discussions of hope as a virtue in Confucian ethics.4 Aquinas has distinct discussions of hope as a "passion" (ST I-II, XL) and as a theological virtue (ST II-II.XVII-XXII).References to Aquinas' Summa Theologica are given using the conventional system of citation by Part, Question, Article and, (where relevant) reply.So, e.g., ST, I-II, LXII, 3, ad 2 refers to the First Part of the Second Part (Prima Secundae), Question 62, article 3, reply to the second objection.This system of reference applies to any edition; I have used the translations from the edition (Aquinas 1964-75) translated by the Fathers of the English Dominican Province (London, Eyre and Spottiswood, 1964-75) except where I have used the translations by John Osterle of the extracts from the Summa that he has published as (Aquinas 1983) and (Aquinas 1984).5 See https://plato.stanford.edu/entries/hope/(accessed on 31 July 2023) for a useful overview.6 Aquinas thinks not; he argues that hope must be for what is believed to be a good that is possible, but "arduous and difficult to obtain" (ST I-II, XL, 1). 7 The phrase is derived from St. 
Paul, who uses it of Abraham in Romans 4: 18. 9 I leave it open whether that requires just the two basic clauses noted above, or some sort of further emotional investment as well.In what follows, I will use "hopefulness" for this general trait and "hope" to refer to a particular occurrent state.However, "hope" has also become firmly established as the name for the theological virtue which I discuss later in the essay, and I will be using the term in that sense there.I hope that the context will make clear the sense in which I am using "hope" in each case.11 This distinction between hopeSA and hopeE has some similarities with, but differs from, Pettit's distinction between "Superficial" and "Substantial" hope.His "superficial hope" is, basically, hopeSA, but his "substantial hope" is a consciously willed strategy: "Hope will consist in acting as if a desired prospect is going to obtain or has a good chance of obtaining" (Pettit 2004, p. 7). 12 Or, at least, thinking it more likely than a cool objective assessment of probabilities would suppose.13 Note that the fearful and even the depressive thinker can still hopeSA-that is, desire an outcome that is supposed to be neither certain nor impossible.It would at least be a very extreme kind of depressive thinking that had lapsed into such apathy as to cease to even desire anything or to think that literally nothing that was desirable had any chance at all of happening.14 Hence, John Lippitt has compared this "man of experience" to the "frogs in life's swamp" in Fear and Trembling who encourage the lad who is hopelessly in love with a princess to forget her and consider "the rich brewer's widow" as a better match.See (Lippitt 2015, p. 126;referencing Kierkegaard 1983, p. 35). 15 This character might be the naïve young optimist, who is not aware of being in despair or might be someone who is consciously in despair at being lost in possibilities which are never actualized. 16 It might be, though, that to call such states "vices" is excessively moralistic and condemnatory.(One should sympathize with the depressive, and the wishful thinker might, in some circumstances, seem more comical, or even charming, than depraved.And, indeed, Kierkegaard does seem to take those attitudes to the two characters he imagines; he is much harsher to the "man of experience").But wishful and depressive/fearful thinking certainly seem to be undesirable character traits, inimical to human flourishing, and to that extent they fit the philosophical concept of a vice.Insofar as they tend to lead the one who has them away from a truthful estimate of the probabilities of events, they can certainly be considered intellectual vices.And, as such, they have a moral component, since the intellectually vicious tend to make misjudgments that may bring harm to others, as well as themselves.And there are certain forms, at least, of depressive and wishful thinking-an attitude (on the one hand) of wallowing in gloom and using it as an excuse for apathy, and (on the other) of feckless irresponsibility-that do seem to be appropriate targets for moral condemnation.17 Of course, I may properly hope that I can pass an exam or run a marathon, but in such cases, I am hoping that I do in fact have the abilities needed.It would-normally-be odd for an able-bodied person to say "I hope for a drink of water" when there is a full glass of it just in front of him or her. 
18 See Bovens (1999).It should be said that this point should not be taken too far; it won't help if we overestimate our chances of success so much that we think we do not even need to bother to make much effort. 19 From https://poets.org/poem/hope-thing-feathers-254, accessed on 24 August 2023.20 See Wittgenstein: "To believe in a God is to believe that the facts of the world are not the end of the matter."(Wittgenstein 1979, p. 74). 21 Pettit thinks one can deliberately choose to act as if an outcome was more probable than it is and claims that this need involve no self-deception: hopers "set themselves to act and react as if things were otherwise than the evidence suggests they are or as if they were more firmly established than the evidence shows.But people can do this quite openly and honestly."(Pettit 2004, p. 10).Such a person would be, as it were, intermediate between the hero who still struggles for an outcome without hopeE for it and the person who lapses into wishful thinking.I must admit that I am not really convinced that it is possible to form plans and attitudes as if something were more likely than it is without becoming deceived about that likelihood. 22 Of course, people who have such irrational hopes may find themselves unable to shake them off, even when they recognize them as irrational.And this might lead some of them to question their starting assumption that the empirical facts are all that there is.In other words, such "irrational" hopes might be taken as intimations of something beyond the empirical, which would provide some grounding for hopes that would not make sense in purely empirical terms.For some suggestive, if opaque, reflections on this theme, see Marcel (1978). 23 Hopes wrongly so called, of course, according to Kierkegaard here; however, in For Self-Examination, he does not deny that "the hope of the understanding" really is a sort of hope. Full selfhood is only realized through the self resting transparently in the power that established it, i.e., God.See Kierkegaard (1980a, p. 14).28 Kierkegaard, as we saw, claimed that one should not really use the term "hope" for "an expectant person's relationship to the possibility of multiplicity", but we typically do, and even Kierkegaard himself was not, as we have seen, consistent in sticking to his own stipulation about how to use the word. 29 As was once widely supposed; see, e.g., George (1998).For arguments that Kierkegaard is not hostile to friendship or romantic love as such, see Ferreira (2001, Chaps. 3-6;Krishek 2009;Lippitt 2013, Chap. 4); however, these authors disagree amongst themselves about how exactly Kierkegaard understands the relation between preferential and neighbour love.See the discourse "Mercifulness a work of love even if it can give nothing and is able to do nothing" (Kierkegaard 1995, pp. 315-30).But this discourse takes it as unproblematically true that those who can help should do so.As Lippitt notes, "the centrality of the parable of the Good Samaritan to [Works of Love] makes clear" that love requires us to provide practical, physical help to others where we can (Lippitt 2013, p. 77). 
31 I am thus siding with Robert Roberts (see Roberts (2022) for a thorough account of Kierkegard as a specifically Christian virtue ethicist) and against Sylvia Walsh, who argues that Kierkegard's radical Lutheran theology puts him outside the virtue ethical tradition (see Walsh 2018; see also Dalsgaard 2015).32 One might say that there is a virtue of marital love (Aristotle does not; Kierkegaard does not say so directly, but I think one can see the letters of Judge William as attempting to delineate such a virtue), which would have essentially to do with one's relationship to a single other (one's spouse).But, of course, for each different individual, the virtue would concern that individual's relationship with a different person, so the virtue as such is not defined in terms of any one specific individual.Faith, hope and love are defined essentially as virtues of a relationship with one specific Other. 33 Although we should note that, for Aristotle, virtues, while means in one sense, are extremes in another (Aristotle 2019, pp. 29 and 34 (Bk II, 6, 9)); one can never be too courageous, though one can be too ready to rush into danger.34 Davenport (2008, p. 206).Davenport is primarily concerned with Fear and Trembling in this essay but takes himself-rightly, I think-to be explicating Kierkegaard's own view of faith.See also (Davenport 2015). 35 Buber (1958, p. 112).Whether all metaphysical theorizing must take this sideways-on form-and whether classical metaphysics has always so taken it-are further questions.For Plato, the assent through the Forms to the ultimate Form of Goodness/beauty is driven by eros, a passionate personal need and desire.For an account of how Aquinas' "Five Ways" to show God's existence can be seen as "directions for the mind in meditation", rather than dispassionate objective proofs, see Ward (2002, pp. 54-56).36 I think, though, that many of Aquinas' insights can be translated out of this unhelpful framework.37 Bernier also sees faith as a sort of precondition for hope, but in a rather different way.He holds that "faith is a willingness to hope, wherein the self secures a ground for the possibility of hope" (Bernier 2015, p. 212), But this perhaps suggests too narrow a view of faith, and "willingness to hope" may make it sound too voluntaristic.38 This is why it is a commanded love-see Kierkegaard (1995, pp. 
17-44).Does Kierkegaard's emphasis here on the injunction "you shall love" mean that he is committed to a "divine command theory" of morality?Stephen Evans has argued that Kierkegaard does have a divine command theory, but not of the classic Ockhamist kind (see Evans 2004).According to Ockham's view, God's commands are ultimately inscrutable, and I obey them simply out of love for God (this is the one "natural" non-commanded element of Ockham's ethics, as it needs to be on pain of circularity).So, according to this view, my (commanded) love of neighbour is based directly on my love of God.But Kierkegaard, on Evans' view, does not think God's commands are inscrutable or arbitrary; they are based on God's essential nature as Goodness itself.He commands us to love other people because He has created them as beings of intrinsic value, which are thus worthy of love.If we were perfected saints, we would be motivated to love others simply by our perception of their worthiness; as sinners, though, we need the obligation laid upon us by a divine command to give us a further source of motivation.Though I am generally sympathetic to Evans' view, I have argued (Rudd 2015) that the intrinsic worthiness of others creates the obligation to love them, without further need for explicit divine commands.But that worthiness is itself derived from God's goodness, which is the basis for our love of God.39 Note that the vague "the power which established" the self from the first formulation has been replaced by the specific "God".40 Indeed, Kierkegaard's usage is not even consistent within Sickness.In a remark I quoted above, he describes the struggle to believe that, for God, all things are possible as "the battle of faith, battling madly, if you will, for possibility" (Kierkegaard 1980a, p. 38).But "faith" here, which is in the thick of the struggle against despair, cannot be identical with the Faith that is the achieved state of synthesis. 41 In For Self-Examination, Kierkegaard imagines himself in conversation with Luther, admitting that he does not have faith but still describing himself as a "believer" (Kierkegaard 1990b, pp. 17-18).This is essentially the distinction that I am making for Kierkegaard here, though it has to be said that it does not cut much ice with the imagined Luther (Kierkegaard 1990b, p. 18). 10 30
16,543.8
2023-11-24T00:00:00.000
[ "Philosophy" ]
Scientometric Analysis of Research Performance of African Countries in selected subjects within the field of Science and Technology

This paper assessed the performance of African countries in selected fields of Science and Technology (S&T) over the last twenty years. The purpose was to determine the readiness of these countries to align with the strategic direction set by the African Union's Agenda 2063 (AU 2063). The AU 2063 aims to bring about a paradigm shift from the current structure, in which its member states depend on natural resources to drive their economies, to one that is knowledge-based. It thus sets out pillars for achieving this goal, which include: building and/or upgrading research infrastructures; enhancing professional and technical competencies; promoting entrepreneurship and innovation; and providing an enabling environment for STI development in the African continent. Data used for the study were retrieved from the SCImago database, which comprises a total of seven (7) subject areas cutting across one hundred and twenty-six (126) subject categories. In the SCImago database, information was also retrieved on S&T performance with respect to publications in the world and in Africa over the 20-year period 1996-2015. Microsoft Excel was used to analyse the data collected. Results were presented in tables and figures on the top 10 most productive African countries in the field of S&T across all seven selected subject areas. The paper suggested an intra-African collaborative effort between low- and high-performing countries in Africa as an option for developing the knowledge capacities needed for realising its regional developmental Agenda (AU 2063).

Introduction

African leaders have seen the need to place the continent on a pedestal aimed towards self-reliance, capable of promoting economies of its member states that are more sustainable and in tune with what is obtainable in the developing world. In 2014, to re-affirm its vision of "an integrated, prosperous and peaceful Africa, an Africa driven and managed by its own citizens and representing a dynamic force in the international arena", the African Union, under its AU Agenda 2063, recognized Science, Technology and Innovation (STI) as multi-functional tools and enablers for achieving continental development goals, and hence initiated the Science, Technology and Innovation Strategy for Africa 2024 (STISA-2024). The STISA-2024 is the first of the ten-year incremental phasing strategies designed to respond to the demand for science, technology and innovation to have an impact across critical sectors such as agriculture, energy, environment, health, infrastructure development, mining, security and water, among others. The strategy is firmly anchored on six distinct priority areas that contribute to the achievement of the AU Vision. These priority areas are: Eradication of Hunger and Achieving Food Security; Prevention and Control of Diseases; Communication (Physical and Intellectual Mobility); Protection of our Space; Live Together-Build the Society; and Wealth Creation. The strategy further defines four mutually reinforcing pillars which are considered prerequisite conditions for its success. These pillars include: building and/or upgrading research infrastructures; enhancing professional and technical competencies; promoting entrepreneurship and innovation; and providing an enabling environment for STI development in the African continent.
It anticipates that continental, regional and national programmes will be designed, implemented and synchronized to ensure that their strategic orientations and pillars are mutually reinforcing, and achieve the envisaged developmental impact as effectively as possible. Every positive-oriented society today needs skilled and talented individuals to generate new ideas, products, processes and commercial enterprises. Therefore, existing studies have shown that accessing performance on the basis of STI, African countries performance is rated poorly if measured on indicators as tertiary education institutions, intellectual property and innovativeness and productivity and competitiveness [1]. This position was also supported and explained by the United Nations Economic Commission for Africa (UNECA) in its African Science, Technology and Innovation Review 2013 report document. The review was done to assess STI status and performance in the African context with a view to describing the innovation ecosystem in Africa. It looks at the innovation value chain from the perspective of training and research and development; technology development, acquisition, use and application. In the last decade, Africa has recorded an annual growth rate of about 15 percent in terms of enrolment rate in tertiary institutions while in 2008, the figure for Sub-Saharan African countries on this same indicator was only 6 per cent which is lower when compared with statistics on other continents Asia (26%), Latin America and the Caribbean (38%). Furthermore, in terms of researchers involved in R&D, Africa performance is still relatively poor. For instance, in a survey conducted in 13 countries by African Science, Technology and Innovation Indicator Initiative (ASTII), the result shows that more than half of these countries have fewer than 1000 R&D researchers in total. However, only Gabon, Senegal and South Africa have more than 20 per cent of their total R&D personnel with PhD qualifications while Mozambique and Kenya reported less than 2 percent for this indicator [2]. To ascertain this claims, different tool for assessing performance and productivity of a system like Scientometrics can be employed. Though there are other tools for assessing scientific production, however, scientometric is very useful for this purpose. In the field of Science and Technology Studies (STS), Scientometrics is a useful tool for measuring the scientific and technological performance of a knowledge system. Scientometrics is done as a measurement of scientific publications using a method referred to as Bibliometrics [3]. Scientometrics is restricted to the measurement of science communications, whereas Bibliometrics is designed to deal with more general information processes [4]. Scientometrics is for science what econometrics is for economics [5]. The advent of journal Scientometrics in 1978 from a research unit in the Hungarian Academy of Science and Scientific conferences, led to the development of Scientometrics as a discipline [6]. They stated that it was developed around one core notion (citations) though the discipline can study (to some extent) many aspects of the dynamics of science and technology. The citation is not only important in Scientometrics but provide a quantitative metrics for measuring research impact. 
Mingers and Leydesdorff further buttressed this position, stating that "The act of citing another person's research provides the necessary linkages between people, ideas, journals and institutions to constitute an empirical field or network that can be analyzed quantitatively". This paper seeks to use Scientometrics to analyze the research performance of African countries in selected subjects within the field of S&T.

Methodology

The research was designed based on the need to find the best approach that could lead to a logical route to addressing the objectives of the research. The focal objective of this study was specifically to examine how African countries have performed in S&T over the last twenty years (1996-2015). To this end, the research design approach upon which this study was built rests on the previous research works of [7] and [8], where in both cases scientometric analyses of publication output on S&T in India, covering 1989-2014 and 1996-2011 respectively, were carried out by these scholars. Therefore, in this study, the sample population drawn from the SCImago database comprises a total of seven (7) subject areas.

Data Analysis and Discussion

The ten most productive countries in Africa in the field of Science are shown in Table 1. Their corresponding ranking in the world is also shown to reflect their position beyond the continent. South Africa is ranked first in Africa and 34th in the world, having produced 188104 documents, of which 91.66% are citable. Ranked second in Africa is Nigeria, with a world ranking of 52nd, a wide margin from that of South Africa. Nigeria produced 59372 documents during the years under review, of which 95.38% are citable. Nigeria, along with South Africa and Tunisia, records a high percentage of self-citation in the region, at over 21%. Egypt would be expected to stand at the second position in Africa considering its 42nd position in the world; however, following the ranking list as obtained from SCImago, the country was not included on the list. In terms of H-index, South Africa has the highest ranking, followed by Kenya and Nigeria. Interestingly, in terms of citations per paper, Kenya recorded the highest score in this category despite its 6th position in Africa. This shows that, despite the low volume of documents produced during the period under review, it was able to attract attention within the academic community. Overall, the Northern African countries prove to be very strong in the production of scientific knowledge in Africa, with several countries from that region appearing in the ranking. The performance of these North African countries may be a result of their collaboration with fellow countries in the Arab region, such as Saudi Arabia and the Emirates, from which they also receive grants to promote their research activities. The overall performance of Africa as ranked in the world calls for improvement and for addressing the challenges that researchers in this part of the world face, which directly impact the number and quality of publications from the region. In the field of Agricultural & Biological Science, South Africa and Nigeria still maintain the top two positions in Africa. Nigeria has the highest percentage of citable documents (99.51%), as shown in Table 2, and is closely followed by Ethiopia, which records 99.10%.
South Africa has the highest case of self-citation (28.75%) followed by Ethiopia (25.37%) and Nigeria (24.34%). In terms of citation per document, Kenya tops this section having recorded 13.9% citations per document produced. Kenya has also performed well as indicated by the H-index having 103 behind South Africa which is ranked the first position in the field of Agricultural & Biological Science. According to the data presented in Table 3, recording a total of 18946 in terms of document produced, South Africa ranked the highest position within the field of Biochemistry, Genetics and Molecular Biology between the periods under review. In terms of quality of documents produced, there is a fair distribution in the percentage of citable documents across all the 10 countries between the ranges of 98.74% being the highest from Nigeria and to the lowest coming from South Africa (96.57%). With 20.06%, Nigeria tops other countries in the case of self-citations. This is not to its credit as citations should be assessed from other researcher and not a function of authors citing their own work so as to boost its citation counts. The case of self-citation in Nigeria calls for concern if considering that despite being ranked 2 nd in the continent in terms of number of documents produced, it has the lowest (7.64) citation per document and highest percentage of self-citation so far. Overall, South Africa's productivity in this subject field can be said to be balanced, having received the highest H-index in this category, over a double figure for the rest of the countries in the ranking. Research in the field of Chemical Engineering shows that South Africa tops the ranking list in the publication figure in Africa (Table 4). Unlike in the other subject categories considered earlier, there is a departure from the usual in the percentage of citable documents produced where Sudan has 100% of its documents citable. Cameroon recorded the highest percentage of self-citation of 19.65% followed by South Africa. In terms of citations per documents, Morocco recorded the highest figure in the region. Chemical Engineering is an important field that plays a significant role in the production of chemicals for industries alike, Africa's research in this direction is commendable. Source: SCImago, Author analysis, 2017 Computer Science as a field of Science is very important in the world today. Virtually all human activities are dependent on one form of technology or the other. Over the years, Asian countries have built capacities and enforce their superiority in the field of Information and Communication Technology (ICT) over other developing countries. A look at figures in Table 5 shows that Africa's productivity in this subject area is still dominated by South Africa with a total of 10644 documents produced. The North African countries appear to be more formidable in this subject field having displaced Nigeria to the 5 th position in the ranking. In terms of citations per paper, Kenya standing at the 8 th position is closely ranked with South Africa having received 3.59 citations per document. Considering the total figure of documents produced, Africa researchers need to improve on their publication activity within this subject field since the relevance of Technology cross-cut all sectors of human endeavour today. In today's world, Engineering concepts and applications have continued to react to the dynamics of the society. 
Either in Construction, Design, Machine fabrication or Industrial input, Engineering is an important field that is as old as humanity itself. South Africa is still the dominant country in this field has produced a total of 19163 documents so far ( Table 6). The North African countries (Tunisia and Algeria) are closely ranked after South Africa in terms of documents produced, citations per document and even in H-index received. Worthy of note here is that Tunisia and Algeria have more cases of self-citation in the region, an indication that is not favourable to the quality of publications. Source: SCImago, Author analysis, 2017 Research in the field of Material Science is also important to a nation's technological development. It connects with the industries in terms of quality of material resources needed for production. Besides, the engineering field also relates with this field as a form of support for the production of technology-oriented outputs needed as inputs in other sectors of the economy. Table 7 shows South Africa still topping the chart in Africa having produced a total of 10956 documents. Algeria, Tunisia and Morocco are next ranked to South Africa. Notably in this field is the introduction of Cote d'Ivoire and Senegal to the table for the first time even though they occupy the bottom position in the ranks. West African countries are more engaged in research in this field of science. In terms of self-citation, the North African countries recorded the higher percentage in this. In Africa, Medicine is a field that still needs improvement in terms of research and human capacity development. Africans are the most travelled for medical attention in the world presently, according to the statistics on Medical tourism. Mostly, the destination is to Asian countries especially India, and some other countries like Saudi Arabia, Germany, Israel etc. It can be deduced that in the field of medicine, as shown in Table 8, aside South Africa, research into this field is relatively low in West Africa countries. Considering the quality of publication among authors from these countries in the region, South Africa, Kenya, Nigeria and Uganda received higher H-index over all other countries. Conclusion The purpose of this paper which is to assess research performance of African countries in selected fields of S&T with respect to seven subject areas has been undertaken and with revealing inferences. Relating this outcome to realizing the AU 2063 Agenda by member countries, there is a ray of hope in its attainment. Although more commitment in the area of research and funding is needed. A particular case is that of the Medicine field where most of the citizens of countries like Nigeria and others still embark on medical tourism to Asia and other European countries. Although the case of South Africa is different from that of other Africa countries in this regard. The country has capacities and physical infrastructure to attend to medical issues of his citizens, hence record low figure in medical tourism. South Africa tops the chart of the most productive countries in Africa in all the S&T field and occupy a position of 34 th in the world. A closer look on the country next to South Africa, which is Nigeria, occupy 52 nd position in the world. 
It can be deduced from the outcome that countries like South Africa, including some North African countries like Morocco, Tunisia, Algeria, etc., enjoy adequate funding and maintain a clear strategic direction towards aligning their national developmental priorities to their research orientation. Besides, they have been able to structure and functionalize their National Innovation Systems (NIS) such that industrial needs informs their research priorities and knowledge acquisition. In conclusion, the overall performance of African countries as it concerns this paper is promising and could be said to align towards realizing the regional goal. However, there is need for more coordinated and collaborative effort across the regions where it seems to be more productive. To this end, intra-African collaboration that is geared towards promoting knowledge development between researchers from low and high performing countries in Africa should be encouraged.
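As an aside on reproducibility (not part of the original study), the indicators discussed in the tables above (percentage of citable documents, citations per document, self-citation share, and a productivity ranking) can be derived from a SCImago-style country export with a few lines of code. The sketch below uses Python/pandas rather than the Excel workflow described in the Methodology; the column names, and all figures other than the quoted document counts for South Africa and Nigeria, are illustrative assumptions, not actual SCImago field names or values.

```python
import pandas as pd

# Hypothetical SCImago-style country export; column names are assumed.
# Only the document counts for South Africa and Nigeria come from the text;
# the remaining figures are illustrative placeholders.
data = pd.DataFrame({
    "country": ["South Africa", "Nigeria", "Kenya"],
    "documents": [188104, 59372, 20000],
    "citable_documents": [172416, 56633, 19000],
    "citations": [2500000, 450000, 280000],
    "self_citations": [540000, 95000, 40000],
    "h_index": [340, 150, 180],
})

# Indicators used throughout the paper's tables.
data["pct_citable"] = 100 * data["citable_documents"] / data["documents"]
data["citations_per_doc"] = data["citations"] / data["documents"]
data["pct_self_citation"] = 100 * data["self_citations"] / data["citations"]

# Rank countries by output volume, as done for the "top 10" tables.
data["africa_rank"] = data["documents"].rank(ascending=False).astype(int)

print(data.sort_values("africa_rank").to_string(index=False))
```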
3,869.6
2018-01-26T00:00:00.000
[ "Economics", "Engineering" ]
A Topic-aware Summarization Framework with Different Modal Side Information Automatic summarization plays an important role in the exponential document growth on the Web. On content websites such as CNN.com and WikiHow.com, there often exist various kinds of side information along with the main document for attention attraction and easier understanding, such as videos, images, and queries. Such information can be used for better summarization, as they often explicitly or implicitly mention the essence of the article. However, most of the existing side-aware summarization methods are designed to incorporate either single-modal or multi-modal side information, and cannot effectively adapt to each other. In this paper, we propose a general summarization framework, which can flexibly incorporate various modalities of side information. The main challenges in designing a flexible summarization model with side information include: (1) the side information can be in textual or visual format, and the model needs to align and unify it with the document into the same semantic space, (2) the side inputs can contain information from various aspects, and the model should recognize the aspects useful for summarization. To address these two challenges, we first propose a unified topic encoder, which jointly discovers latent topics from the document and various kinds of side information. The learned topics flexibly bridge and guide the information flow between multiple inputs in a graph encoder through a topic-aware interaction. We secondly propose a triplet contrastive learning mechanism to align the single-modal or multi-modal information into a unified semantic space, where the summary quality is enhanced by better understanding the document and side information. Results show that our model significantly surpasses strong baselines on three public single-modal or multi-modal benchmark summarization datasets. INTRODUCTION The rapid growth of the World Wide Web has led to the flood of information across the Internet [18,39,52].On content websites such as CNN.com, Twitter.com, and WikiHow.com,there are often corresponding images, videos, and side text along with the main document, which can attract readers' attention and help them understand the content better [7,36,44,49].Herein, we regard the auxiliary images/videos/text as side information.Since the side information frequently make reference to the article's main content explicitly or implicitly, such information can also be used to improve summarization quality, as shown by the two examples from CNN and WikiHow Apps in Figure 1.There is also other side information in real-world applications such as citation papers, summary templates, and reader comments, which are helpful for summarization [15].It is thus desired to extend text-based summarization models for taking advantage of the summarization clues included in such side information. There are previous works exploring utilizing side information from a specific domain.For example, Narayan et al. [35] first proposed to utilize image captions to enhance summarization performance.Other textual side information such as citation papers [1], reader comments [14], user queries [20], prototype templates [13] are also utilized in summarization tasks.Recently, the benefits of visual information on summarization have also been explored.To name a few, Zhu et al. [57] incorporated multimodal images and Li et al. 
[22] utilized videos to help better summarization.These works are typically designed for one specific modality of side information, while a more generally useful summarization framework should be able to process different modalities of information in a flexible way.Hence, in this paper, we target to address a general summarization framework that can flexibly unify different modal side information with the input document to generate better summaries. There are two main challenges in this task.The first challenge comes from the different modalities of side information.Regardless of the presented format of side information, a summarization model needs to align and unify it with the document into the same semantic space.The other challenge lies in the fact that the side inputs can contain information from various aspects, and the model should recognize the aspects useful for summarization.In the first case in Figure 1, only if the summarization model can connect the visual information "earth" and "launching" to the textual information can it generate the informative summary.In the second case in Figure 1, the query describes the question from computer and safety aspects, which should be the focus when making a summary. In this work, we propose a Unified-Modal Summarization model with Side information (USS) to tackle the above challenges.Firstly, we propose to use topics as the bridge to model the relationship between the main document and the side information.Topics are a subject or theme of documents or videos, and traditional works employ topics as cross-document semantic units to bridge different documents [9].Moreover, we observe that topics can also be an information bridge for multi-modal inputs.For instance, in the first case in Figure 1, we can use topics "aerospace" and "nature" to relate the videos with the summary text.Hence, in this work, we expand the topic modeling from single-modal to multi-modal for unifying the main document and various types of side information.For the second challenge, apart from the limited side-document pairs, we utilize rich non-paired side and document inputs in the collected datasets, and propose a cross-modal contrastive learning module to align the main document and side information into a unified semantic space.Concretely, in our model, we first introduce a unified topic model (UTM) to learn the latent topics of the target summary by using the main document and the side information to predict the topic distributions of the summary.Since UTM aims to predict the topic distribution of the target summary, it does not rely on the specific modality attributes of the input.Based on the learned topics, we construct a graph encoder to model the relationship between the main document and side inputs.In this topic-aware graph encoder, we let information from two sources flow through different channels, i.e., by direct edges and indirect edges through topics.In the decoding process, we propose a hierarchical decoder that attends to multi-granularity nodes in the graph guided by the topics.Moreover, the triplet contrastive learning mechanism pushes the paired document and side representations closer and unpaired representations far away from each other, so as to enhance the model's capability of understanding the main document and side information. Our contributions can be summarized as follows: • We propose a general summarization paradigm that can take advantage of different types of side information in a flexible way to enhance summarization performance. 
• To model the interaction between various inputs and unify them into the same semantic space, we propose a unified topic model and a triplet contrastive learning mechanism.
• Empirical results demonstrate that our proposed approach brings substantial improvements over strong baselines on benchmark datasets.

RELATED WORK

Summarization with Side Information. Relying only on the main body of the document for summarization cues is challenging [23,32,54,56]. In fact, articles in real-world applications often have side information that is beneficial for summarization. A series of works utilized textual side information such as image captions [35], questions [10,11,20], prototype summaries [14], citation papers [1,8], timeline information [6], and prototype templates [13]. Recently, research on multimodal understanding has become popular, and the benefits of using visual information for summarization have also been explored. Gao et al. [15] provided a survey on side information-aware summarization. Side information-aware summarization can also be regarded as a kind of multi-document summarization. Cui and Hu [9] and Zhou et al. [55] introduced topic and entity information into the summarization process, respectively. Different from previous works, which take either visual or textual side input, we propose a general framework that can be flexibly applied with different types of side inputs.

Topic Modeling. Neural topic modeling (NTM) was first proposed by Miao et al. [33], which assumes a Gaussian distribution of the topics in a document. Fu et al. [12], Liu et al. [26], Xie et al. [47], and Yang et al. [50] further explored it for the summarization task in the text domain. Specifically, Cui and Hu [9] employed NTM to jointly discover latent topics that can act as cross-document semantic units to bridge different documents and provide global information to guide summary generation. Liu et al. [25] proposed topic-aware contrastive learning objectives to implicitly model topic change and handle information-scattering challenges for the dialogue summarization task. In this work, we come up with a unified topic model to fit the unified-modal setting, which requires discovering latent topics beyond single-modal text input.

Contrastive Learning. Contrastive learning is used to learn representations by teaching the model which data samples are similar and which are not. Due to its excellent performance in self-supervised and semi-supervised learning, it has been widely used in natural language processing. Lee et al. [21] generated positive and negative examples by adding perturbations to the hidden states. Cai et al. [5] augmented contrastive dialogue learning with group-wise dual sampling. It has also been utilized in caption generation [31], summarization [4,13,25,29], dialog generation [16], machine translation [3,51], and so on. In this work, we use contrastive learning to unify multimodal information in the summarization task.

MODEL

In this section, we first define the task of unified summarization with side information and then describe our USS model in detail.

Problem Formulation. Given the main document D and its side information S, we assume there is a ground-truth summary Y = (y_1, y_2, ..., y_|Y|). To be specific, the document is represented as a sequence of words D = (w_1, w_2, ..., w_n). The side information can be in textual or visual format. For textual side information, it is represented as a sequence of words S = (s_1, s_2, ..., s_m), and for visual side information, we use S = (v_1, v_2, ..., v_m) to denote the images.
Here n and m denote the number of words (or images) in the document and in the side information, respectively. Given D and S, our model generates a summary Ŷ = (ŷ_1, ŷ_2, ..., ŷ_|Ŷ|). Finally, we use the difference between the generated Ŷ and the gold Y as the training signal to optimize the model parameters.

Overview

Our model is illustrated in Figure 2 and follows the Transformer-based encoder-decoder architecture. We augment the encoder with a unified topic modeling network (§ 3.3), which learns latent topic representations from the source inputs and the target summary; based on these, a topic-aware graph encoder (§ 3.4) builds graphs for the document and the side input and models their relationship through the learned topics. Correspondingly, we design a summary decoder (§ 3.5) which generates the summary with a topic-aware attention mechanism. To better align representations from different spaces, we also design a triplet contrastive learning module (§ 3.6) to map the paired multimodal information into the same space.

Unified Topic Modeling

We first use a unified topic model (UTM) to establish the relationship between the document and the side information. The model takes inspiration from the neural topic model (NTM) [33], which applies only to textual inputs. We first introduce the NTM and then explain how we adapt it to grasp the semantic meanings of multimodal inputs.

Overall, NTM assumes the existence of underlying topics throughout the inputs. Concretely, NTM encodes the bag-of-words term vector of the input into a topic distribution variable, based on which it reconstructs the bag-of-words representation. In the reconstruction process, the topic representations can be extracted from a projection matrix. In our UTM, instead of reconstruction, we aim to predict the bag-of-words vector of the target summary based on the two inputs. The benefits are threefold. Firstly, we no longer require the input to be in textual format and can encode the semantic meanings of various modalities into the distribution variable. Secondly, we preserve the most salient information from the inputs, instead of keeping all of it, which is consistent with the information-filtering nature of the summarization task. Lastly, the combination of topic modeling on the document and the side input can better fit the target summary topic distribution.

Concretely, we first process the document into the bag-of-words representation h_bow^d ∈ R^|V|, where |V| is the vocabulary size. The same is done for the side information when it is in textual format, leading to h_bow^s. When the side information is images or videos, we use EfficientNet [41] to obtain the vector representation, also denoted as h_bow^s. We then employ an MLP encoder to estimate their exclusive priors μ_* and σ_*, which are used to generate the topic variables of the two inputs through a Gaussian softmax:

μ_* = f_μ(MLP(h_bow^*)), σ_* = f_σ(MLP(h_bow^*)), z_* ~ N(μ_*, σ_*^2), θ_* = softmax(z_*),

where * can be d or s, f_μ(·) and f_σ(·) are neural perceptrons with ReLU activation, N(·) is a Gaussian distribution, and θ_* ∈ R^K are the latent topic variables of the document or the side information.

Given the topic variables θ_d and θ_s, UTM predicts the bag-of-words representation of the target summary, i.e., θ = θ_d + θ_s and ĥ_bow^y = softmax(θ W). We add the topic variables of the two inputs together to include information from both sources, as well as to emphasize the salient information that is shared between both sides. Based on the topic distribution θ, we construct the bag-of-words representation of the target summary ĥ_bow^y. In this process, the weight matrix W ∈ R^{K×|V|} can be regarded as the topic-word relationship, where W_{k,i} indicates the weight of the i-th word in the k-th topic and K is the topic number.
The topic distribution θ = θD + θS ∈ R^K reflects the proportion of each topic, and a higher score means the corresponding topic is more important. We will take advantage of this distribution to determine the main topics of each case in the next section.

The objective function is to simultaneously minimize the Wasserstein distance between p(z*) and q(z* | h*), and to maximize the probability of reconstructing hY, where p(z*) is the standard Gaussian distribution. We employ the Wasserstein distance instead of the traditional KL-divergence since the former has been shown experimentally to be superior to the latter [42].

Topic-aware Graph Encoder
Graph Construction. Since we have extracted the salient topic distributions of the two inputs, we can use them as bridges to let the two information sources interact with each other. We thus design a topic-aware graph encoder where we model the relation between the document and the side input through different channels, i.e., by direct edges and by indirect edges through topics. By direct edges, we let information flow globally in the graph, while by indirect edges, the document communicates specific information with the side input under different topics.

Node Initialization. For both inputs, we use the Transformer encoder [43] or the EfficientNet model to encode each document or image independently to capture the contextual information. We first introduce the Transformer architecture in detail, and we will also propose variations of the attention mechanism. Generally, the Transformer consists of a stack of token-level layers to obtain contextual word representations in the document or side information. We take the document to illustrate this process.

For the l-th Transformer layer, we first use fully-connected layers to project the word state h(i,l−1) into query, key, and value vectors. Then, the updated representation of token i is formed by linearly combining the value entries with the attention weights, scaled by the hidden dimension d. The above process is summarized as MHAtt(h(i,l−1), h(*,l−1)), where * denotes indices from 1 to n. Then, a residual connection takes the output of the self-attention sublayer as input, followed by a feed-forward sublayer:

ĥ(i,l−1) = LN(h(i,l−1) + MHAtt(h(i,l−1), h(*,l−1))),
h(i,l) = LN(ĥ(i,l−1) + FFN(ĥ(i,l−1))),

where FFN is a feed-forward network with an activation function and LN is layer normalization [2].

Graph Encoding. The document and side graphs communicate with each other through topic-guided and direct interactions. The topic-guided interaction starts from the learning of the document representations and the side representations, and then the topic representations. The direct interaction only updates the document and side nodes. We omit the layer index here for brevity.

Concretely, in the topic-guided interaction, the document and side information representations are updated from three sources. Taking the document nodes as an example, they are updated by (1) performing self-attention across document nodes; (2) performing cross-attention to obtain the topic-aware document representations, as shown in Figure 3(a); and (3) performing our designed topic-guided attention mechanism, as shown in Figure 3(b). This mechanism starts with the application of a self-attention mechanism on the document nodes. Then, taking the topic representation hT = Σ(k=1..K) hT,k as the condition, the attention score αi on each original document representation hD,i is calculated with hT as the query. The topic-aware document representation is hD,i weighted by αi, denoted as h̃D,i. In this way, we highlight the salient parts of the two inputs under the guidance of the topics. Last, a feed-forward network is employed to integrate the three information sources.
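To make the topic-guided attention concrete, here is a hedged sketch in which the summed topic representation acts as a single query over the document nodes and each node is reweighted by its attention score. Tensor shapes, scaling, and function names are assumptions for illustration, not the paper's implementation.

import torch
import torch.nn.functional as F

def topic_guided_attention(doc_nodes: torch.Tensor, topic_nodes: torch.Tensor):
    """doc_nodes: (n, d) document token states; topic_nodes: (K, d) topic vectors."""
    d = doc_nodes.size(-1)
    topic_cond = topic_nodes.sum(dim=0, keepdim=True)            # (1, d): the condition hT
    scores = topic_cond @ doc_nodes.transpose(0, 1) / d ** 0.5   # (1, n) salience scores
    alpha = F.softmax(scores, dim=-1)                            # attention over doc nodes
    # Topic-aware document representations: each node scaled by its salience weight.
    return alpha.transpose(0, 1) * doc_nodes                     # (n, d)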
The topic representation is updated by performing (1) self-attention and (2) cross-attention on the adjacent document and side nodes. In the cross-attention, the topic representation is taken as the query, and the document and side representations are taken as the keys and values. Lastly, a feed-forward network integrates the two information sources to obtain the updated topic representation.

Aside from communicating between the graphs through topics, we also have a direct interaction that concatenates all document and side nodes in the graph and then applies a self-attention mechanism.

The topic-aware and direct interactions are processed iteratively, and we denote the final updated representations for the document, the side information, and the topics as ĥD ∈ R^{n×d}, ĥS ∈ R^{m×d}, and ĥT ∈ R^{K×d}.

Summary Decoder
Since the decoder needs to incorporate the information from multiple sources in the graph encoder, we design a hierarchical decoder that first focuses on the topics and then attends to the inputs. This topic-guided mechanism indicates which topics should be discussed at each decoding step. Our hierarchical decoder follows the style of the Transformer, and we omit the layer index next for brevity.

For each layer, at the t-th decoding step, we first apply masked self-attention on the summary embeddings (MSAttn), obtaining the decoder state g_t. The masking mechanism ensures that the prediction at position t depends only on the known outputs at positions before t. Based on g_t we compute the cross-attention scores over the topics, obtaining the topic attention weights α(T,t) ∈ R^K. We then use the topic attention to guide the attention on the other two graphs, where the topics can be regarded as an indicator of saliency. Taking the main document as an example, we combine α(T,t) with the similarity weights to obtain the document attention weights α(D,t) ∈ R^n, where M ∈ R^{K×n} is the similarity matrix between the topics and the document. In a similar way, we obtain the attention weights α(S,t) ∈ R^m on the side information.
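The following is a hedged, single-head sketch of this hierarchical, topic-guided decoder attention: the topic attention is propagated to the document nodes through a topic-document similarity matrix. Names, shapes, and the use of plain dot-product scores are assumptions for illustration only.

import torch
import torch.nn.functional as F

def hierarchical_attention(dec_state, topic_nodes, doc_nodes):
    """dec_state: (d,); topic_nodes: (K, d); doc_nodes: (n, d)."""
    d = dec_state.size(-1)
    # (1) Attention over topics, with the decoder state as the query.
    topic_attn = F.softmax(topic_nodes @ dec_state / d ** 0.5, dim=-1)              # (K,)
    # (2) Similarity matrix between topics and document nodes.
    sim = F.softmax(topic_nodes @ doc_nodes.transpose(0, 1) / d ** 0.5, dim=-1)     # (K, n)
    # (3) Document attention guided by the topic attention.
    doc_attn = topic_attn @ sim                                                      # (n,)
    # Context vectors; downstream they would be concatenated with the decoder state.
    return topic_attn @ topic_nodes, doc_attn @ doc_nodes                            # (d,), (d,)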
The attention weights α(T,t), α(D,t), and α(S,t) are then used to obtain the context vectors c(T,t), c(D,t), and c(S,t), respectively. Taking the topics as an example, c(T,t) is the α(T,t)-weighted sum of the topic representations ĥT. These context vectors, treated as salient contents summarized from the various sources, are concatenated with the decoder hidden state to produce the distribution over the target vocabulary. All the learnable parameters are updated by optimizing the negative log-likelihood objective of predicting the target words.

Triplet Contrastive Learning
The challenge of unifying different modalities is to align and unify their representations at different levels. In this section, we propose a triplet contrastive learning mechanism that determines whether the textual and visual representations match each other. We can utilize large-scale non-paired text corpora and image collections to learn more generalizable textual and visual representations, and improve the capability of vision and language understanding. As shown in the fourth part of Figure 2, the main idea is to let the representations of paired images or text be close to each other in the semantic space while the non-paired ones are far away. For the positive sample construction, we apply mean pooling on the representations ĥD as the overall representation of the document, and do the same for the side information. The final decoder state of the generator is taken as the overall representation of the generated summary, as it stores all the accumulated information. For the negative sample construction, we randomly sample a negative side input, document, or generated summary from the same training batch for each case. Note that, different from the positive pairs, the sampled side inputs and texts are encoded individually without the graph encoder, as they mainly carry weak correlations. In this way, we can create positive examples X+(DS) consisting of paired document-side samples (D, S), X+(SY) consisting of paired side-generation samples (S, Ŷ), and X+(DY) consisting of paired document-generation samples (D, Ŷ). Negative examples are denoted as X−(DS), X−(SY), and X−(DY), respectively. Based on these positive and negative pairs, a contrastive loss L(CL) is utilized to learn detailed semantic alignments across vision and language.

EXPERIMENTS
4.1 Dataset
We evaluated our model on three public summarization datasets with side information: (1) the CNN dataset collected by Narayan et al. [35].

Baselines
Our extractive baselines include: Lead3 produces the three leading sentences of the document as the summary. SideNet [35] consists of an attention-based extractor with attention over side information. BERTSumEXT [28] is an extractive summarization model with a pretrained BERT encoder that is able to express the semantics of a document and obtain representations for its sentences. It only takes the document as input.

Abstractive single-document and multi-document summarization baselines include: BERTSumABS [28] is an abstractive summarization system built on BERT base with a newly designed fine-tuning schedule. It only takes the document as input. We also include BERTSumABS-concat, which concatenates the textual side information with the original document. SAGCopy [48] is an augmented Transformer with a self-attention guided copy mechanism. EMSum [55] is an entity-aware model for abstractive multi-document summarization with a BERT encoder. TG-MultiSum [9] is a multi-document summarizer in which topics act as cross-document semantic units.
The above two multi-document summarization baselines take the textual side input as a second document. We also compare our model with multimodal summarization baselines: MOF [58] is a summarization model with a multimodal objective function that uses the guidance of a multimodal reference to combine the losses from summary generation and image selection. VMSMO [22] is a dual interaction-based multimodal summarizer with multiple inputs. The four models above are all equipped with a BERT base encoder for fairness. OFA [45] is a recent unified paradigm for multimodal pretraining. We adapt it to the side-aware summarization setting, where we directly concatenate the document and side representations encoded by OFA. We choose the OFA base version for fairness.

Evaluation Metrics
For all datasets, we evaluated with standard ROUGE-1 (R1), ROUGE-2 (R2), and ROUGE-L (RL) [24] on full-length F1, which refer to the matches of unigrams, bigrams, and the longest common subsequence, respectively. We then used BERTScore (BS) [53] to calculate a similarity score between the summaries based on their BERT embeddings.

Schluter [38] noted that using only the ROUGE metric to evaluate generation quality can be misleading. Therefore, we also evaluated our model by human evaluation. Concretely, we asked three PhD students proficient in English to rate 100 randomly sampled cases generated by the models from the CNN and WikiHow datasets, which cover different domains. The setting follows [30] at four times the evaluation scale. The evaluated baselines are EMSum, TG-MultiSum, and OFA, which achieve the top performances in the automatic evaluations.

Our first evaluation quantified the degree to which the models can retain the key information, following a question-answering paradigm [27]. We created a set of questions based on the gold summaries and examined whether participants were able to answer these questions by reading the generated text. The principle for writing a question is that the information to be answered is a factual description and is necessary for the summary. Two annotators wrote three questions independently for each sampled case. They then jointly selected the common questions that they both considered important as the final questions. Finally, we obtained 147 questions, where correct answers are marked with 1 and 0 otherwise. Our second evaluation assessed the overall quality of the generated summaries by asking participants to score them taking into account the following criteria: Informativeness (does the summary convey important facts about the topic in question?), Coherence (is the summary coherent and grammatical?), and Succinctness (does the summary avoid repetition?). The rating score ranges from 1 to 3, with 3 being the best. Both evaluations were conducted by another three PhD students independently, and a model's score is the average of all scores.
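As a concrete illustration of the automatic metrics described above (ROUGE and BERTScore), the following sketch uses the third-party rouge-score and bert-score packages; it is not the paper's own evaluation script, and the example strings are invented.

from rouge_score import rouge_scorer
from bert_score import score as bert_score

candidate = "a man posted a lottery ticket on social media"
reference = "too excited to win the lottery, post the lottery in moments"

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
rouge = scorer.score(reference, candidate)              # full-length F1 per metric
print({name: round(s.fmeasure, 4) for name, s in rouge.items()})

P, R, F1 = bert_score([candidate], [reference], lang="en")
print("BERTScore F1:", round(F1.item(), 4))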
Implementation Details
All models were trained for 200,000 steps on an NVIDIA A100 GPU. We implemented our model in PyTorch and OpenNMT [19]. For the neural baselines except OFA, and for our model, we used the 'bert-base' or 'bert-base-chinese' versions of BERT for fair comparison. Both source and target texts were tokenized with BERT's subword tokenizer. Our Transformer decoder has 768 hidden units, and the hidden size of all feed-forward layers is 2,048. In all abstractive models, we applied dropout with probability 0.1 before all linear layers; label smoothing [40] with smoothing factor 0.1 was also used. For the CNN dataset, the encoding step is set to 750 for the document and 70 for the side information; the minimum decoding step is 30, and the maximum step is 50. For the WikiHow dataset, the four parameters are set to 600, 10, 30, and 65. For the Chinese VMSMO dataset, the parameters are 200, 125, 10, and 50, where 125 is the number of encoded frames; the video frames are selected every 25 frames to ensure the continuity of the images, similar to [22]. We used the Adam optimizer as our optimization algorithm. We also applied gradient clipping with a range of [−2, 2] during training. During decoding, we used a beam size of 5 and tuned the length penalty [46] between 0.6 and 1 on the validation set; we decoded until an end-of-sequence token was emitted, and repeated trigrams were blocked. Our decoder applies neither a copy nor a coverage mechanism, since we rarely observed issues with out-of-vocabulary words in the output; moreover, trigram blocking produces diverse summaries while reducing repetition. We selected the 5 best checkpoints based on performance on the validation set and report averaged results on the test set.
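To illustrate the trigram blocking mentioned above, here is a minimal sketch of the check applied during beam expansion: a candidate token is disallowed if it would complete a trigram that already occurs in the partial hypothesis. The function name and integration point are assumptions, not the paper's code.

def forms_repeated_trigram(prefix_ids, candidate_id):
    """prefix_ids: list of token ids already generated for one beam hypothesis."""
    if len(prefix_ids) < 2:
        return False
    new_trigram = tuple(prefix_ids[-2:] + [candidate_id])
    seen = {tuple(prefix_ids[i:i + 3]) for i in range(len(prefix_ids) - 2)}
    return new_trigram in seen

# During beam search, candidates for which this check returns True would have
# their scores set to -inf so the blocked continuation is never selected.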
Human Evaluation. As shown in Table 2, on both evaluations participants overwhelmingly preferred our model. The kappa statistics are 0.42, 0.49, and 0.45 for Info, Coh, and Succ, respectively, indicating moderate agreement between annotators. All pairwise comparisons among systems are statistically significant using a two-tailed paired t-test at a significance level of 0.01. We also provide examples of system output in Table 3. We can see that, with the side information showing the figure of the main character, the lottery result, and the mobile phone, USS successfully captures the gist, "a man posted a lottery ticket on social media", in the generated summary. BERTSumABS-concat and VMSMO miss key information such as "where he posted the lottery" and "how quickly the lottery was falsely claimed".

ANALYSIS AND DISCUSSION
5.1 Ablation Study
We conducted ablation tests to assess the importance of the topic modeling, the graph encoder, and the triplet contrastive learning. For USS w/o unified topic modeling, only the traditional neural topic model (NTM) is applied to the textual document to obtain the topic representations. For USS w/o graph encoder, there are no topic-related interactions, and the outputs from the topic modeling are directly used for decoding. The ROUGE results are shown in the last block of Table 1. All ablation models perform worse than USS in terms of all metrics, which demonstrates the superiority of the full USS model. Concretely, the graph encoder makes a great contribution to the model, improving the R2 score on CNN by 1.3 and on WikiHow by 1.0. Contrastive learning also contributes, bringing a 0.7 RL improvement on the CNN dataset. We further conducted experiments on VMSMO to probe the impact of two important parameters, i.e., the topic number and the graph layer number. From Figure 4, we can see that for both experiments the ROUGE scores increase with the topic and layer numbers at first, and after reaching an upper limit they begin to drop. Note that with only one graph layer our model already outperforms the best baseline, which demonstrates that our topic-aware graph module is effective. Hence, we set the default topic number to 100 and the graph layer number to 4.

Topic Quality Analysis
In this subsection, we qualitatively and quantitatively investigate the quality of the selected topics. We compared the topics learned by our model with baseline topic models trained on the CNN dataset, including (1) GSM [33], a classic NTM model with a VAE and a Gaussian softmax, and (2) W-LDA [34], a neural topic model in the Wasserstein autoencoder framework.

In Table 4, we use the coherence score [37] to quantitatively evaluate the inferred topics, which has been shown to be highly consistent with human evaluation. We also show the inferred words for the topic "economy". It can be seen that our USS outperforms the other baselines in terms of the coherence score, and the inferred topic words are more accurate and concentrated. The possible reasons are twofold. Firstly, our model incorporates the main and side inputs to predict the topic distribution of the target summary. The multiple descriptions of the same content bring more topic clues, and the prediction task, which requires reasoning and filtering abilities, makes the topic model strong and robust. Secondly, the auxiliary summarization task can boost the performance of topic modeling.
Effect of Unified Topic Modeling
Since we have verified the quality of the topics, we are interested in the effect of the learned topics on summarization, i.e., how does the unified topic modeling help summarization?

Table 3: Examples of the generated summaries by the baselines and USS on the CNN and VMSMO datasets. Unfaithful and redundant information is highlighted in blue. In the second case, keywords with the same semantics are highlighted in red and green.

We first examine the encoder side, where we show the learned topic distributions of the two inputs for the case in Table 3 in Figure 5(a). It can be seen that, although the document and the side information have different topic distributions, they generally focus on the same important topics, which are related to the ground-truth summary according to human inspection. From the statistical view, we draw the curve of the loss L(UTM) in Figure 5(b). The curve shows a steady downtrend at the beginning and finally reaches convergence. The above observations demonstrate that the topic modeling can grasp the gist of the target summary, confirming its effectiveness.

We next examine the topic effectiveness in the summarization process from the decoder side. We visualize the attention weights α(T,t) on the topics in Figure 6(a) for the same case. It can be seen that the topic attention first emphasizes topic 1, and then topics 2 and 3. The three topics are shown in Table 3 and are related to "social media", "crime", and "finance", respectively. This is consistent with the generated sentence, where the keywords start from "Moments" and then change to "falsely claimed redemption". In this way, we can see that the topics play a guiding role when generating summaries.

Contrastive Learning Analysis
We lastly examine the performance of the triplet contrastive learning module by visualizing the contrastive loss curve on VMSMO in Figure 6(b). It can be seen that the loss fluctuates at the beginning of training and gradually reaches convergence. This phenomenon demonstrates that the generated text, the document, and the side information belonging to the same case are getting closer in the semantic space, while the unpaired triplets are becoming more distant.

CONCLUSION AND LIMITATION
In this paper, we proposed a general summarization framework which can flexibly incorporate various modalities of side information. We first proposed a unified topic model to learn latent topic distributions from inputs of various modalities. We then employed a topic-aware graph encoder that relates one input to another through topics. Experiments on three public benchmark datasets show that our model produces fluent and informative summaries, outperforming strong systems by a wide margin.

Figure 1: Articles with various side information and summaries collected from the CNN and WikiHow apps. The side information (video and user query) can enhance the summarization performance.

Figure 2: Overview of USS, which consists of four parts: (1) Unified Topic Modeling (left) jointly learns latent topics from both inputs; (2) the Topic-aware Graph Encoder (bottom) relates the document to the side information; (3) the Summary Decoder (right) generates with a hierarchical topic-aware attention mechanism; and (4) Triplet Contrastive Learning (top) aligns the multiple inputs and outputs into a unified semantic space.
Figure 3: (a) Cross-attention mechanism for document and topic nodes. (b) Topic-guided attention mechanism, which shares semantic information across the document and side information under the guidance of the topics.

Figure 4: (a) Relationship between the number of topics and the average of RG-1, RG-2, and RG-L. Best viewed in color. (b) Relationship between the number of graph layers and the same average.

Figure 5: (a) The topic distributions of the document and the side information. (b) UTM loss (L(UTM)) curve during training.

Figure 6: (a) Visualization of the attention weights on topics. (b) Contrastive learning loss (L(CL)) curve in the triplet contrastive learning module.

Article: Recently, a citizen of Nantong, Jiangsu, won a lottery ticket. He took photos of the entire lottery ticket and uploaded them to Moments. Unexpectedly, someone else falsely claimed the lottery winnings as his own based on the information on the lottery. The lottery was redeemed within only 35 seconds after the start of the redemption day, as investigated by the Sports Lottery Center. 近日,江苏南通,市民张先生彩票中了奖后,将整张彩票拍照上传了朋友圈,不料被人根据彩票上的信息冒领了奖金。经体彩中心调查当天开始兑奖后仅35秒奖金就被兑走。
Reference summary: Too excited to win the lottery, post the lottery in Moments and got falsely claimed immediately 中奖太兴奋,朋友圈晒彩票瞬间被冒领
OFA: 35 seconds after winning, the lottery was falsely claimed 中奖35秒后被冒领彩票
MOF: Man showed the winning lottery and was falsely claimed in 35 seconds 男子晒中奖35秒被冒领
VMSMO: Post lottery in Moments and get falsely claimed 朋友圈晒中奖被冒领
USS: Friends from Moments falsely claimed the lottery, only 35 seconds after the redemption started 朋友圈冒领彩票,中奖35秒就被兑走
Highest three topics: Topic 1: old friend 老朋友, Liang family 梁家, WeChat 微信, phone calls 通电话, Brothers 兄弟俩; Topic 2: covet 贪图, steal 偷盗, kidnap 拐骗, holocaust 大屠杀, steal everything 抢光; Topic 3: prize 奖金, tens of thousands 好几万, giants 豪门, net flow 净流入, more than 100 million yuan 亿余元
Side information: sampled images from the video.

As for the topic nodes, we use the intermediate parameters learned from the UTM as raw features to build the topic representations HT = f(W), where the k-th row of HT ∈ R^{K×d}, denoted as hT,k, is a topic vector with predefined dimension d, and f(.) is a neural perceptron with ReLU activation.

Table 1: Comparison with other baselines when the side information is text. All our ROUGE scores have a 95% confidence interval of at most ±0.28 as reported by the official ROUGE script. Numbers in bold mean that the improvement over the best baseline is statistically significant (a two-tailed paired t-test with p-value < 0.01). '-' indicates unavailability.

Table 4: Coherence score and inferred topic words of different topic models. Blue text denotes repetition or non-topic words.
8,244.6
2023-05-19T00:00:00.000
[ "Computer Science" ]
Anterograde Intramedullary Nailing without Bone Grafting for Humeral Shaft Nonunion Associated with Early Exploration of Secondary Radial Nerve Palsy: A Case Report

Background: Humeral shaft fractures are relatively common. Complications associated with this type of fracture and its treatment include nonunion and radial nerve palsy. Plate osteosynthesis with autologous bone grafting is considered the gold standard for treating nonunion. However, bone grafts might not always be necessary in cases of hypertrophic nonunion, and treatment should be tailored to the specific type and characteristics of the nonunion. The treatment of radial nerve palsy is debated, with some favoring expectant management based on the nerve's ability to regenerate, and others preferring early surgical exploration to prevent possible lasting nerve damage. Methods: We present the case of a 46-year-old male patient with a six-year-old humeral shaft fracture resulting in hypertrophic nonunion. We treated the nonunion with anterograde intramedullary nailing without bone grafting. Postoperatively, the patient developed severe radial nerve palsy. After repeated electrophysiological studies, a decision was made to surgically explore the nerve 10 days after the nonunion surgery. The nerve was subsequently found to be intact and treated with neurolysis. Results: Bony union was shown at six months after nonunion surgery. Four months after the nonunion surgery, the patient started to show clinical signs of nerve recovery, and at 12 months he achieved nearly full clinical recovery of radial nerve function. Conclusions: Anterograde intramedullary nailing without autologous bone grafting may be considered an option for treating hypertrophic nonunion. The management of radial nerve palsy requires effective cooperation and communication between patient and physician. Further research is necessary to be able to better predict nerve recovery.

The choice of treatment stands in relation to the fracture type, displacement, and patient needs. Conservative treatment is considered by many to be the gold standard of treatment [1,3,4], as the outcomes are generally favorable, with union rates between 77.4% and 100% [1,5]. Yet, in the past decades, the trend has shifted toward surgery [1,5-7], mostly with plate osteosynthesis or intramedullary nailing (IMN) [1,8]. Nonunion is a major complication of fracture treatment and can be defined as failure of the bone to unite within six months [9,10]. Nonunion of HSF occurs with conservative management as well as surgical treatment, even though the rate is significantly higher in conservatively managed fractures [2,3,8,10-12]. Moreover, it is associated with significant morbidity, poor function, and poor quality of life [2,13]. The classification of Weber and Cech (1976) is the most commonly used tool to describe the different types of nonunion: hypertrophic, oligotrophic, and atrophic [14]. Hypertrophic nonunion is biologically active and vital, while atrophic nonunion is associated with biological inactivity and lack of viability [2,9,15]. Treatment of nonunion is mostly performed surgically using a variety of techniques: plate fixation with or without autologous bone grafting (ABG), IMN with or without ABG, bone struts, or external fixation [2,16]. As a more advanced treatment strategy, synthetic bone graft substitutes have appeared in recent years and find increasing use [17].
A complication highly associated with HSF and its treatment is radial nerve palsy (RNP), with a prevalence reported between 2% and 19% [1,7,18,19]. RNP can be classified as partial or complete, or as primary or secondary in accordance with the time of appearance after injury. Primary injury appears at the time of injury, while secondary RNP occurs during treatment [20-22].

Seddon's classification of peripheral nerve injury divides such injuries into three categories, termed neuropraxia, axonotmesis, and neurotmesis. Another commonly used classification is Sunderland's, which divides nerve injury into five degrees [23]. Neuropraxia, corresponding to Sunderland's first degree, can be defined as nerve contusion with a local nerve conduction block, with injury to the myelin sheath but intact axons. In axonotmesis (Sunderland degrees 2, 3, and 4), injury to the axons has occurred while the supporting structures (epineurium, endoneurium, and Schwann cells) remain intact to a variable extent. Neurotmesis (Sunderland 5) is a complete dissection of the nerve [24,25]. Some refer to a sixth Sunderland degree, representing a mixed nerve lesion involving both axonal damage and a conduction block [26].

Axonotmesis and neurotmesis are associated with Wallerian degeneration (WD). WD is a sequence of physiologic and metabolic changes characteristic of peripheral nerve injury (PNI) [27]. It is an innate immune-regulated mechanism initiated by axonal injury [28]. It starts 24-36 h post-injury and is completed within approximately 9-10 days, after which it can be objectively confirmed using electrophysiological studies (EPS) [25,26,29]. EPS, including nerve conduction studies (NCS) and electromyography (EMG), is used to localize and classify a nerve lesion according to type, severity, and prognosis, and for monitoring [26,29].

The management of RNP can be categorized as expectant treatment or early exploration. The former encompasses a watchful waiting attitude, in accordance with the high rate of recovery of the radial nerve, without the need for surgical intervention [6,7,21,30]. If no signs of recovery appear within a period of 2-3 months, then late surgical exploration of the nerve can be considered [18,20,23]. Early exploration is performed within eight weeks from injury [7,18,23]. Depending on the intraoperative status of the explored nerve, neurosurgical treatment may consist of neurolysis, neurorrhaphy, or nerve or tendon transfer [21,27].

The Quick Disabilities of the Arm, Shoulder, and Hand questionnaire (QuickDASH) is a widely used and well-tested instrument to assess upper extremity function. Lower values indicate better function [31].

In the following, we present the case of a patient with a longstanding hypertrophic, malaligned nonunion of a humeral shaft fracture, treated with anterograde IMN without ABG. Postoperatively, the patient developed severe RNP, which was managed with early surgical exploration and neurolysis.
Case Report
A 46-year-old, right-hand-dominant male presented for evaluation of a non-united left HSF causing functional impairment affecting his daily activities and work performance. Six years earlier, he had suffered a transverse fracture of the middle third of his humeral shaft, which was treated surgically with plate fixation and seven screws in another service. Approximately two weeks postoperatively, the osteosynthesis material failed after repeated stress. The patient was treated in his hometown service, where it was decided to leave the material in place and continue the treatment conservatively with six weeks of immobilization in a U-slab cast. The patient was lost to follow-up until he presented to our service with the complaints described above.

On physical examination, the patient exhibited a varus deformity of the left brachium. Upon palpation, abnormal movement at the fracture site could be elicited. The patient's main disturbance was functional impairment, particularly a lack of grip strength. Elbow and shoulder range of motion (ROM) were unaffected. Plain radiographs showed a hypertrophic non-united humeral shaft fracture with 40° varus malalignment and the presence of deteriorated osteosynthesis material, consisting of a plate and seven screws (Figure 1).

Surgical treatment was discussed with the patient, and he was informed about the risks of surgical correction. The patient chose to undergo operative treatment involving removal of the deteriorated hardware, resection of the hypertrophic callus, and corrective osteotomy with fixation using anterograde IMN. His informed consent was obtained. On the day of intervention, the patient received a preoperative interscalene nerve block under ultrasound guidance. The patient was placed in the beach chair position on the operating table.

The surgical approach was made laterally along the scar of the former incision. Through blunt dissection between the biceps and triceps, the humerus was exposed, revealing the presence of deteriorated osteosynthesis material partially included in the hypertrophic callus. Firstly, we removed the proximal part of the plate, during which the second screw, counting from superiorly, broke. While removing the distal part, the most superior screw of the plate fragment broke. Using a hollow reamer, the remaining screw fragments were removed.
The radial nerve was identified at the level of the distal humerus. The hypertrophic callus was exposed in a circular fashion, and corrective osteotomy was performed with resection of the callus.

We proceeded with the fixation of the osteotomy using an anterograde IMN. A 4 cm incision was made at the anterolateral level of the acromion, followed by dissection until exposure of the supraspinatus tendon. The tendon was incised in line with its fibers. Using an awl, the entry point for the nail was created and a guide wire was introduced under direct visualization of the osteotomy site. The humeral canal was prepared with 9 mm and 10 mm reamers. A 360/9 mm nail was selected and inserted. Proximal locking was performed percutaneously with two screws and distally under direct visualization, with the distal screw in dynamic mode. All screws were inserted in a transverse fashion (Figure 2).
Placement was confirmed using intraoperative fluoroscopy. Shoulder and elbow ROM and the stability of fixation were tested on the operating table. Layered wound closure was subsequently performed.

On the first postoperative day, after cessation of the anesthesia effect, the patient continued to show a deficit of wrist and finger extension and mild paresthesia in the territory supplied by the radial nerve. A neurologist was consulted and performed EPS on the third postoperative day, which showed complete denervation of the target terrain of the radial nerve, excluding the long head of the triceps brachii. On the recommendation of the neurologist, the patient received dexamethasone 8 mg for seven days. A control EPS was undertaken on the seventh day, with the same result.

On the tenth postoperative day, the patient repeated the EPS with NCS and needle EMG (Figure 3), which showed an aspect of recent, severe neuropathy of the left radial nerve with a complete absence of motor response and signs of Wallerian degeneration. There was absent motor unit recruitment in almost all muscles supplied by the radial nerve, except the brachioradialis muscle, which showed some fibrillations. The long head of the triceps brachii muscle had normal innervation. The location of the nerve lesion was therefore diagnosed to be inferior to the axilla, distal to the branch to the long head of the triceps.

The performing neurologist's recommendation was surgical exploration of the radial nerve. This was discussed with the patient, and with his consent he was transferred to the Department of Plastic Surgery. The exploration was performed under general anesthesia on the 13th day after the nonunion correction.
Intraoperatively, the plastic surgeon found that the radial nerve remained intact. They performed a decompression with partial excision of fibrotic tissue and evacuation of about 50 mL of lysed hematoma, followed by neurolysis along the entire trajectory of the radial nerve in the brachium. At the level of the osteotomy site, the radial nerve was adherent to an approximately 0.5 mm bony spur, from which it was liberated.

Postoperatively, the patient reported improvement in the sensory function of the radial nerve, without amelioration of the motor deficit. The patient was transferred back to our service and discharged with instructions to begin physical therapy immediately.

Wound and fracture healing were uneventful. Six months after the nonunion surgery, bony union was confirmed via plain radiography (Figure 4). Four months after the nonunion correction, the patient reported the first signs of radial nerve recovery. At 12 months, the patient reported a QuickDASH score of 11.4, with a score of 26 in the optional work module. Repeated EPS at 13 months post-neurolysis demonstrated aspects of chronic radial nerve neuropathy, with reduced amplitude and velocity in motor and sensory conduction of the left radial nerve, without signs of a conduction block or active denervation.

Discussion
We described a case of a longstanding hypertrophic, malaligned nonunion of a humeral shaft fracture treated with anterograde IMN without ABG. Immediately after surgery, the patient developed an RNP with a complete absence of motor response and a partial sensory deficit. The RNP was treated with early surgical exploration and neurolysis. The onset of RNP recovery began at 16 weeks, and the nerve had almost fully recovered by 12 months.

The causes of impaired bone healing are diverse. Consequently, fracture nonunion is a heterogeneous entity, and its treatment should be tailored accordingly [9]. The requirements for successful fracture healing can be divided into biological and mechanical. Giannoudis et al. [32] introduced the "Diamond concept of fracture healing", consisting of three biological factors: an osteogenic cell population, osteo-inductive stimuli, and an osteoconductive matrix scaffold. The fourth factor is mechanical stability. In hypertrophic nonunion, the last factor is lacking, while the biology is intact [2,9,15,32-34].
Callus formation is the body's physiological response to interfragmentary fracture mobility, attempting to reduce movement in order to achieve consolidation. If a fracture treatment, surgical or conservative, is unable to keep the local strain below 10%, no union can be expected, as only fibrous tissue can tolerate this amount of mobility. The result is a hypertrophic nonunion if there is adequate blood flow and residual cell vitality [15,32-34]. It is characterized by stiff or rigid mobility in the nonunion area [15].

Plate fixation with ABG is considered the gold standard treatment for nonunion [2,10,16,34]. Peters et al. [16], in their review of 36 studies, aimed to compare union rates among operative strategies and found union rates of 98% for plate fixation with ABG, compared to 95% without ABG. IMN with ABG had a union rate of 88%, and without ABG 66%. Bone struts had a union rate of 92% and external fixators 98%, but these were associated with a higher rate of complications (20-22%). The type of nonunion was not taken into account.

Oliver et al. [10] studied 86 patients undergoing ORIF with plate fixation with or without ABG and found no significant difference between the two cohorts. In addition, they found no significant difference between the treated nonunion types. Furthermore, 95% of nonunions united without supplementary ABG. Micic et al.
[35] reported a 90% union rate in 20 patients with humeral shaft nonunion treated with locking IMN without ABG. These high union rates without bone grafting have called the gold standard into doubt, because graft harvesting, with the anterior iliac crest being the most commonly used site, is associated with considerable donor site complication rates of 20-39%, with infection, hematoma, fracture, pain, and dysesthesias representing some of the potential complications [2,10].

It is advocated to take the type of nonunion and the underlying pathologic process into greater consideration when deciding on the nonunion treatment [33,34]. In our patient, we did not use ABG because the type of nonunion suggested a good local biological background for future healing, and the patient's history of early failure of plate osteosynthesis after repeated stress led us to opt for IMN.

IMN is less invasive, causes fewer circulatory problems, and has a lower risk of radial nerve injury than plate fixation [16,36]. It is able to provide stable fixation combined with load-sharing and allows for early weight-bearing and rehabilitation [33,37]. It was also found to involve significantly less intraoperative blood loss, a shorter operative time, and a shorter hospitalization period when compared to plate fixation in the treatment of nonunion [35]. The reaming process associated with IMN also improves stability by enhancing the bone-nail contact area and increasing periosteal blood circulation, supporting bone formation. The reaming process carries important biological effects for nonunion healing, as its debris is rich in osteoprogenitor cells and growth factors, and it transports mesenchymal stem cells into the intramedullary space, which can be considered "internal bone grafting" [9,33]. The disadvantages of IMN include reduced interfragmentary compression compared to plate fixation, which, in the humerus, a non-weight-bearing bone with intrinsically lower axial compressive forces, can lead to less consolidation [16]. Shoulder pain and stiffness are also more common with IMN [8,11,37].

Nonunion significantly impacts a patient's health-related quality of life (QoL). Patients suffering from nonunion report a lower QoL than most other patients in the musculoskeletal disorder population. Their QoL also ranks significantly lower than that of patients with chronic diseases such as chronic obstructive pulmonary disease, acute myocardial infarction, and stroke [13]. Even after successful achievement of union, their QoL seems to remain below that of the standard reference population [38]. This does not change the importance of treating nonunion, as the effect on the patient's situation is substantial, as suggested by Vincken et al. [13] when they seek to explain the large difference in QoL rates between their study and another QoL study: their rates, obtained from a population of untreated nonunion patients, were much lower, whereas the patients in the other study had already reached bony consolidation [38].

Before any surgery, patients have to be thoroughly informed about the possible risks. Particularly associated with surgery of the humeral shaft are injuries to the radial nerve, due to its anatomical relation and variable course even under physiologic conditions [22,25,39].
The radial nerve starts as a branch of the posterior cord of the brachial plexus. It enters the posterior compartment of the brachium through the triangular interval, crosses from medial to lateral, during which it is in direct contact with the humeral periosteum over a distance of 6.3 ± 1.7 cm, and pierces the lateral intermuscular septum distally [40,41]. This close contact with the humerus and passage through rigid spaces, such as the intermuscular septum, leave the radial nerve at increased risk of injury in cases of HSF and their treatment [18,20]. Iatrogenic RNP due to surgical management of HSF has an incidence between 1.9% and 3.3%, or even higher after nonunion repair. A consecutive retrospective cohort review examined the rate of RNP in humeral shaft nonunion surgery, finding that 6.9% of 379 patients showed iatrogenic RNP. Among them, 15.8% had persistent deficits at twelve-month follow-up [40,42].

The radial nerve displays a considerable capacity for self-regeneration, as 70-80% of primary RNP recover spontaneously, with an even higher rate when considering secondary RNP exclusively [7,19,43,44]. The average onset of recovery is at about 7-10 weeks, with full recovery at 5-8 months [6,7,19]. Because of this, the management of RNP is a topic of debate, with the majority supporting expectant strategies with late exploration if indicated [1,7]. A commonly proposed strategy is repeated controls using EPS at three, six, and twelve weeks [20]. Surgical exploration is recommended only in patients without signs of improvement after eight to twelve weeks [18,20,23]. Some support even longer observation periods, between four and five months and sometimes up to six months [19,43,45].

In 2005, Shao et al. [19] published a systematic review of RNP associated with HSF. They found no clinically significant difference in the final result between the early and late exploration groups, suggesting that there was no negative effect on nerve recovery when management was initially expectant. This can allow for a spontaneous return to function, avoiding unnecessary surgery [20,22,23]. By delaying surgery, the neurilemmal sheath has time to thicken, which might facilitate easier neurorrhaphy if late repair is indicated [19,20,22].

Ilyas et al. [7] performed an update of Shao's systematic review. They included the cases of the previous study, thereby covering a time span from 1964 until August 2017. They divided the patients into initial expectant treatment, delayed surgical exploration (after eight weeks), and early surgical exploration. Unlike Shao et al. before them, however, they did not combine expectant and late surgical treatment in the final analysis, so as to compare the true outcomes of RNP treated early versus late. The expectant group had a recovery rate of 77.2%. Palsies managed with early exploration within three weeks showed a significantly higher recovery rate of 89.9% (p < 0.001). Moreover, 68.1% of palsies undergoing late exploration recovered. This lower number was assumed to be associated with nerve retraction, distal motor end plate loss, muscular atrophy, and irreversibility of nerve injury due to delayed management. Ilyas et al.
found this to challenge the dogma of expectant treatment. They concluded that the question must be asked whether a rate of one in four non-surgically treated patients showing no spontaneous recovery is acceptable in order to withhold early surgical treatment. Supporters of early exploration argue that delaying the intervention could compromise nerve recovery due to the risk of progression of nerve degeneration, which can lead to motor end plate loss and irreversible muscular atrophy [18,23,45], the latter developing after six months [44]. The potential for motor neuron regeneration already decreases after 7-8 weeks [18]. Early classification and characterization of the nerve injury leads to early appropriate treatment and faster, more complete, and more predictable recovery [18,23].

Reported rates of intraoperative radial nerve status vary. According to Ilyas et al. [7], the radial nerve was in continuity in 63.7%, incarcerated in 10.5%, and lacerated in 26.8% of cases. Others found 95.9% to be in continuity, of which 10% were entrapped, and 14.1% to be transected [6]. Rasulic et al. [45] found 57.9% to be in discontinuity. Nerve incarceration and laceration carry a poor prognosis for recovery [6,7]. The timing of surgery and the surgical technique represent the only prognostic factors effectively influenceable by physicians, in contrast to the mechanism, type, and severity of injury, the lesion site, and patient characteristics [45].

Early exploration means another intervention for the patient, with the accompanying risks of infection, osteomyelitis, nerve devascularization, and interruption of the nerve's natural environment [18]. Others consider this approach safer and easier than a delayed exploration in fibrotic and possibly anatomically distorted tissue [18,22,23]. Some consider iatrogenic RNP an indication for early explorative management [18], while others opt for handling it in the same expectant way as primary RNP, because it has been shown to have similar [19,43] or even better recovery rates [23].

Our patient underwent multiple EPS in the immediate postoperative period, on the third, seventh, and tenth days, all showing absent motor recruitment distal to the long head of the triceps brachii, locating the lesion inferior to the axilla. The recommendation for surgical exploration was given on the tenth postoperative day, after a week of treatment with high-dose corticosteroids showed no signs of improvement.

NCS measures the velocity, amplitude, and latency of motor and sensory conduction by external stimulation. EMG records insertional activity, abnormal activity, and motor unit potentials (MUP) directly from the muscle, at rest and during contraction, using needle electrodes [26]. The main limitation of EPS in the management of RNP lies in its time dependence. Before days 9-10 after injury, EPS is of no particular diagnostic value, because it takes this time interval for WD to occur and thus for the study to differentiate between neuropraxia and a higher-grade injury. It takes three to four weeks for muscle fibrillations to develop, signifying axonal injury, which may be a mixed injury with some axonotmesis but also neurotmesis. For EPS to reach sufficient specificity and sensitivity to discern between axonotmesis and neurotmesis, four months have to pass [28,29,39,46]. The challenge lies in finding the balance between a time-dependent diagnostic tool and a time-sensitive nerve injury, with the aim of decreasing the rate of avoidable surgery in self-limiting RNP while increasing the chance of timely reconstruction for severe lesions [46].
The EPS performed 13 months after neurolysis showed motor and sensory conduction velocity to be decreased and amplitude severely decreased. Yet, when considering functional outcomes, our patient reported a QuickDASH of 11.4, which can be considered recovered. In the optional work module, the patient's score was 26, so his professional performance was not fully recovered at 12 months post-injury [31]. Şahin et al. [47] studied the correlation between electrophysiologic testing and clinical and functional outcomes in patients after traumatic PNI. They found no statistically significant association between EPS and functional recovery at a follow-up time of 11.6 months. Considering this discrepancy, EPS should be interpreted carefully and in combination with the clinical and functional picture.

Iatrogenic RNP can present a frustrating condition for both the patient and the treating physician, as there exists no gold standard of treatment and the possible causes for the injury are plentiful [29]. The exact cause of this particular RNP remains uncertain. Compression by surgical instruments, hematoma formation, adherence to the bone: all, some, one, or none of these could be the reason [45]. EPS located the lesion inferior to the axilla, distal to the radial nerve branch supplying the long head of the triceps brachii. This location appeared incongruent with, and distant from, our surgical approach. One possible explanation is that the long head of the triceps brachii receives innervation from the axillary nerve rather than the radial nerve in up to 14% of cases [48].

In consultation with the anesthesiologist, the possibility was considered that the RNP could be a complication of the nerve block. PNI is an extremely rare complication of regional anesthesia, with an incidence generally found to be ≤1% [49]. High injection pressure, neurotoxicity of the local anesthetic drug, or direct injury from the needle can be causative for injury [49,50].

For nerve lesions treated with neurolysis, the remaining question is how the treatment affected the outcome, as the possibility exists that the nerve could have improved without the surgery [51].

When assessing the situation, it is important to consider the patient's quality of life. Nerve injury with associated pain or paralysis can have a severe impact on their social and professional life. The inability to perform useful movements with the limb leads to under-usage of said extremity, which may lead to joint stiffness, further prolonging rehabilitation and disability. Long-standing injuries are often accompanied by anxiety and depression, and the treating physician needs to be aware of these psychological impacts. Effective communication and a strong physician-patient relationship are necessary throughout the process of recovery [29].

Conclusions

IMN without ABG may be considered a viable option for the surgical treatment of hypertrophic humeral shaft nonunion, as this type of nonunion has adequate biological potential for healing, rendering bone grafting superfluous in some cases. Further independent statistical analysis with a larger sample of patients and the application of reliability criteria is necessary. Before surgery, patients need to be counseled about the risk of developing post-operative RNP.
The decision between early surgical intervention and expectant management of RNP should be made in close cooperation between the patient and physician. It needs to be an informed decision by the patient, taking into consideration the patient's needs, wishes, and general physical and mental health. The timing of radial nerve exploration remains controversial, with some advocating for early exploration to prevent irreversible nerve damage. Carefully designed studies are required to account for the numerous factors that could influence nerve recovery, such as patient factors, type of surgical fixation, timing and type of surgical exploration, and rehabilitation.

Figure 1. Pre-operative X-ray of the left humerus. Left: antero-posterior view, and right: latero-lateral view, showing hypertrophic nonunion of the humeral shaft fracture with the presence of deteriorated osteosynthesis material.

Figure 2. Postoperative X-ray of the left humerus. Left: antero-posterior view, and right: latero-lateral view, showing the IMN fixation.

Figure 3. NCS on the tenth postoperative day. Left: sensory NCS (sNCS) left ulnar vs. left radial nerve, and right: motor NCS (mNCS) left ulnar vs. left radial nerve. Aspect of EPS on the tenth day after humeral nonunion surgery.
7,725.2
2024-09-15T00:00:00.000
[ "Medicine", "Engineering" ]
Mechanism of chiral proofreading during translation of the genetic code

The biological macromolecular world is homochiral and effective enforcement and perpetuation of this homochirality is essential for cell survival. In this study, we present the mechanistic basis of a configuration-specific enzyme that selectively removes D-amino acids erroneously coupled to tRNAs. The crystal structure of dimeric D-aminoacyl-tRNA deacylase (DTD) from Plasmodium falciparum in complex with a substrate-mimicking analog shows how it uses an invariant ‘cross-subunit’ Gly-cisPro dipeptide to capture the chiral centre of incoming D-aminoacyl-tRNA. While no protein residues are directly involved in catalysis, the unique side chain-independent mode of substrate recognition provides a clear explanation for DTD’s ability to act on multiple D-amino acids. The strict chiral specificity elegantly explains how the enriched cellular pool of L-aminoacyl-tRNAs escapes this proofreading step. The study thus provides insights into a fundamental enantioselection process and elucidates a chiral enforcement mechanism with a crucial role in preventing D-amino acid infiltration during the evolution of the translational apparatus. DOI: http://dx.doi.org/10.7554/eLife.01519.001

Introduction
The origin of homochirality in biological macromolecules has been a subject of active research and intense debate to date (Podlech, 2001;Blackmond, 2010). With the selection of only L-amino acids (L-aas) for incorporation in proteins, effective enforcement and perpetuation of homochirality became essential for an efficient translational machinery to be a part of living systems. To this end, multiple checkpoints ensure that only L-aas are incorporated during translation. These include aminoacyl-tRNA synthetases (aaRSs), elongation factor Tu (EF-Tu) and the ribosome (Jonak et al., 1980;Pingoud and Urbanke, 1980;Bhuta et al., 1981;Yamane et al., 1981;Ban et al., 2000;Agmon et al., 2004;Ogle and Ramakrishnan, 2005). Many aaRSs possess proofreading modules that remove similar non-cognate L-aas mistakenly attached to tRNAs and thus ensure fidelity of translation (Nureki et al., 1998;Silvian et al., 1999;Dock-Bregeon et al., 2004). However, a freestanding enzyme, D-aminoacyl-tRNA deacylase (DTD), removes D-amino acids (D-aas) mischarged on tRNAs and ensures that D-aas do not get incorporated into proteins (Calendar and Berg, 1967;Wydau et al., 2009;Zheng et al., 2009). Since DTDs act in trans as freestanding modules, they are most likely to operate through resampling by recapturing aminoacyl-tRNAs (aa-tRNAs) from EF-Tu (Ling et al., 2009). A DTD-like fold has been found appended to archaeal threonyl-tRNA synthetase (ThrRS) where it removes mischarged L-serine from tRNA Thr (Dwivedi et al., 2005;Hussain et al., 2006, 2010). The structure of the archaeal ThrRS editing domain from Pyrococcus abyssi (Pab-NTD) not only highlighted the evolutionary link between DTD and Pab-NTD but also suggested the probable role this fold might have played in enforcement of homochirality during early evolution of the translational machinery, since weakly discriminating primordial aaRSs would have been less enantioselective (Dwivedi et al., 2005). Even some of the highly evolved present-day aaRSs have been shown to be inherently weak in enantioselection, leading to the formation of D-aminoacyl-tRNAs (D-aa-tRNAs) (Calendar and Berg, 1966;Soutourina et al., 2000b).
D-aa-tRNAs thus formed could either get incorporated into the growing polypeptide chain, leading to global misfolding, or get accumulated in the cell, leading to depletion of the tRNA pool. Either way, decoupling of D-aa from tRNA is extremely important, which makes the cellular role of DTD crucial. DTD activity was originally identified in 1967 by Calendar and Berg and the function is conserved in all organisms including humans (Calendar and Berg, 1967;Zheng et al., 2009). So far three distinct types of DTDs have been reported. The most commonly found canonical DTD has been shown to be present in most bacteria and all eukaryotes (Soutourina et al., 1999). Archaea, on the other hand, lack the canonical DTD sequence in their genomes and instead possess another structurally unrelated protein which carries out the function of deacylating D-aa-tRNAs (Ferri-Fioni et al., 2006). This functional equivalent of DTD has been termed DTD2 and it is found in archaea and plants (Wydau et al., 2007). The third type of DTD, known as DTD3, has been reported in some cyanobacteria that lack both canonical DTD and DTD2 (Wydau et al., 2009). Overall, the universal distribution of DTD function across the three domains of life clearly suggests an essential role DTDs must have played and continue to play in enforcing homochirality. From here on, DTD will refer to the canonical DTD found in bacteria and eukaryotes unless otherwise mentioned. The DTD sequence is highly conserved among prokaryotes and eukaryotes, with the sequence identity between Escherichia coli and Homo sapiens being 39%. The biological significance of DTD has been shown in both prokaryotes and eukaryotes, with deletion of the dtd gene leading to reduced tolerance to several D-aas in a dose-dependent manner (Soutourina et al., 2000a, 2000b, 2004;Zheng et al., 2009). DTD is ubiquitously expressed and shows high levels of expression in human neuronal cells, which are abundant in D-aas, thus strongly indicating a critical role of DTD (Zheng et al., 2009). Mechanistically, the most remarkable challenge that DTD faces is to specifically act on multiple D-aa-tRNAs while rejecting L-aminoacyl-tRNAs (L-aa-tRNAs) without any specificity for either the amino acid or the tRNA. This can be seen from the fact that DTD is able to act on diverse substrates such as Tyr, Phe, Asp, and Trp as long as they carry a D-configuration of the amino acid on tRNA (Calendar and Berg, 1967;Soutourina et al., 2000b).

eLife digest: Amino acids are 'chiral' molecules that come in two different forms, called D and L, which are mirror images of each other, similar to how our left and right hands are mirror images of each other. However, only one of these forms is used to make proteins: the more abundant L-amino acids are linked together to make proteins, whereas the scarcer D-amino acids are not. This 'homochirality' is common to all life on Earth. The molecular machinery inside cells that manufactures proteins involves many enzymes that carry out different tasks. Among these is an enzyme called DTD (short for D-aminoacyl-tRNA deacylase), which prevents D-amino acids being incorporated into proteins. To do this, DTD must be able to recognise and remove the D forms of many different amino acids before they are taken to the growing protein by transfer RNA molecules. However, the details of this process are not fully understood. To investigate this mechanism, Ahmad et al. made crystals of the DTD enzyme in complex with a molecule that mimics a D-amino acid attached to a transfer RNA molecule.
By studying this structure at a high resolution, Ahmad et al. were able to identify how the active site of DTD can specifically accommodate the 'chiral centre' of a complex made of a D-amino acid and a transfer RNA molecule. DTD is able to recognize D-amino acids because of a critical dipeptide that is inserted from one subunit of the DTD into the active site of another subunit of the enzyme. The effect of this dipeptide is to generate a binding pocket that is a perfect fit for the chiral centre of a complex that contains a D-amino acid and a transfer RNA molecule. Moreover, this pocket specifically excludes complexes that contain an L-amino acid. The crucial parts of DTD that form the binding pocket are highly conserved; that is, they are the same in a wide variety of organisms, from bacteria to mammals. This conservation suggests that DTD is crucial for ensuring homochirality throughout all forms of life. Intriguingly, DTD is particularly highly expressed in neurons which are abundant in D-amino acids: this indicates that the DTD enzyme has an important physiological role, which will certainly be the focus of future work. DOI: 10.7554/eLife.01519.002

The problem is further compounded by the very high excess of L-aa-tRNA over D-aa-tRNA in the cellular milieu and warrants a stringent D-configuration specificity to avoid depletion of the L-aa-tRNA pool. Although biochemical studies have indicated its configurational preference, the mechanistic basis of this fundamental process remained elusive due to the lack of a cognate substrate-bound complex structure. The first crystal structure of DTD from E. coli (EcDTD) was solved in the apo form, which identified this novel DTD-like fold (Ferri-Fioni et al., 2001). Later, the apo structures of DTD from Haemophilus influenzae (Lim et al., 2003), Aquifex aeolicus (PDB id: 2DBO) and H. sapiens (Kemp et al., 2007) also became available. In the absence of any ligand-bound structure, docking studies were done with H. influenzae DTD in an attempt to understand its mechanism (Lim et al., 2003). Recently, the structure of Plasmodium falciparum DTD (PfDTD) was solved in complex with ADP and multiple free D-aas (Bhatt et al., 2010). Although these studies had proposed a catalytic mechanism implicating the role of a Thr residue, the structural basis of DTD's strict enantioselectivity was not clear. In this study, we report the mechanism of this crucial process with the help of high resolution structures of PfDTD in complex with a substrate-mimicking analog. We further validate the mechanistic proposal with the help of biochemical assays conducted on PfDTD as well as EcDTD and NMR-based binding studies with PfDTD. The work identifies the essential role of a universally conserved 'cross-subunit' Gly-cisPro motif in providing exclusive enantioselectivity to the enzyme, thus ensuring homochirality during translation.

Results
Co-crystal structure of PfDTD with D-Tyr3AA
PfDTD was co-crystallized with a post-transfer substrate analog D-Tyr3AA, which mimics D-tyrosine attached to the 3′-OH of the terminal adenosine (A76) of tRNA (Figure 1). The ester linkage between amino acid and adenosine is replaced by an amide linkage to make it non-hydrolyzable.
Similar post-transfer substrate analogs have been used extensively to study proofreading mechanisms in atomic detail for both Class I-specific CP1 editing domains and Class II-specific editing domains (Lincecum et al., 2003;Dock-Bregeon et al., 2004;Fukunaga and Yokoyama, 2006;Hussain et al., 2006, 2010). The crystal structure of PfDTD in complex with D-Tyr3AA has been solved in two different crystal forms: crystal form I at a resolution of 1.86 Å in the C2 space group and crystal form II at a resolution of 2.2 Å in the P21 space group (Table 1). Crystal forms I and II have two and eight copies per asymmetric unit, respectively. This provides us with 10 independent observations of the ligand in the active site (Figure 2-figure supplement 1). Since all copies present a similar picture, the higher resolution crystal form I is discussed here unless otherwise mentioned (Figure 2A). The enzyme is a symmetric dimer with two active sites per dimer that are located at the dimeric interface (Figure 2B,C). The residues defining the active site pocket span the conserved -SQFTL- motif from one monomer and the -NXGP(V/F)T- motif from the other. The D-Tyr3AA-bound structure superimposes on the apo structure (PDB id: 3KNF) with an r.m.s.d. of 0.41 Å for 260 Cα atoms (Figure 2-figure supplement 2). However, there are subtle rearrangements of the active site region upon ligand binding, indicating the plasticity associated with the active site (Figure 2D). The most noticeable movements occur in Phe89, Phe137 and Gly138 upon accommodation of D-Tyr3AA, making the active site more compatible for substrate binding (Figure 2D).

Adenosine binding and catalytic mechanism
The active site of DTD uses, in a major way, the main chain atoms to interact with the substrate (Figure 2E). The main chain atoms of Lys107 and Ile43 have direct and water-mediated interactions with the adenine moiety. An invariant Phe137 provides a base-stacking interaction to the adenine ring. The main chain nitrogen of Gly138 along with the side chain hydroxyl of Ser87 holds the 2′-OH. The 5′-OH projects outwards, as should be expected, since it would be attached to the preceding nucleotide (C75) in the actual substrate, which is D-aa-tRNA. Considering that Pab-NTD, which is a structural homolog of DTD (Figure 2-figure supplement 3), also interacts with the substrate mostly through main chain atoms, it appears to be a conserved feature of this fold to employ main chain atoms extensively for ligand binding (Figure 2-figure supplement 4) (Hussain et al., 2006, 2010). Moreover, the adenosine-binding pocket is highly conserved in this DTD-like fold, with an invariant Phe providing the base-stacking interaction (Phe117 in Pab-NTD and Phe137 in PfDTD), as shown in Figure 2-figure supplement 4. To prove that the ligand complex we have obtained is a biologically relevant one, we disrupted the adenine-binding pocket with the help of mutations and showed that it leads to a complete loss of activity. As shown in Figure 3A, Phe137, which stacks with the adenine base, was mutated to Ala. In another mutant, we blocked the adenine pocket by mutating a conserved Ala112 to a bulkier Phe (Figure 3A). Both F137A and A112F mutations resulted in a complete loss of activity, confirming that the adenosine-binding pocket identified here indeed represents the bona fide functional site (Figure 3B). The corresponding mutations F125A and A102F in EcDTD were also tested for their activity against D-Tyr-tRNA Tyr.
These mutants in EcDTD also showed a complete loss of activity (Figure 3-figure supplement 1B), further substantiating the biological relevance of the substrate-binding pocket identified here. To delineate the catalytic mechanism, we looked for all the amino acid side chains located within a distance of 6 Å from the susceptible bond of the substrate, that is, the bond between adenosine and the carbonyl group of D-tyrosine. These residues include Ser87, Gln88, Phe89, Thr90, Met141, and Pro150. Out of these, the residues that can chemically contribute to catalysis are Ser87, Gln88, and Thr90, which are positioned at distances of 5.71 Å, 3.56 Å, and 5.72 Å, respectively, from the carbonyl carbon of the substrate (Figure 3C). To probe the role played by these residues in catalysis, we generated mutants S87A, S87P, Q88A, Q88N, Q88E, T90A, and T90S, and tested them for deacylation activity. All mutants deacylated D-Tyr-tRNA Tyr as efficiently as the wild type PfDTD, except S87A, which showed partly compromised activity (Figure 3B, Figure 3-figure supplement 1A). Although S87A was only moderately active, the fact that S87P retains complete activity rules out any catalytic role for this residue.

[Figure 3 legend (excerpt): 500 pM enzyme concentration was used for the assays. (C) Stereoscopic image showing all the protein side chains within 6 Å of the susceptible bond of the substrate. A water molecule has been modeled based on the Pab-NTD complex structure. The water is positioned at a distance of 2.61 Å from the 2′-OH and 2.79 Å from the scissile bond of D-Tyr3AA. In the absence of any protein side chain playing a role in catalysis, a substrate-assisted mechanism is proposed involving the role of the 2′-OH of tRNA in activating a water molecule, as suggested in the case of Pab-NTD. DOI: 10.7554/eLife.01519.014. The following figure supplements are available for figure 3.]

Therefore, even though Ser87 interacts with the 2′-OH of the ribose, it seems to perform a space-filling function of maintaining the ribose in an active conformation. It is also worth noting here that in some DTDs from different organisms, Ser87 is naturally substituted by a Pro, which further proves that the side chain chemistry of this residue is not essential for catalysis. The catalytic role of the other protein residues Gln88 and Thr90 can also be ruled out, as the Q88A, Q88N, Q88E, T90A, and T90S mutants deacylated D-Tyr-tRNA Tyr as efficiently as the wild type (Figure 3B, Figure 3-figure supplement 1A). Strikingly, Thr90 was identified from the modeling studies (Lim et al., 2003) as a crucial residue responsible for catalysis, as discussed further in a later section. However, mutating this residue did not at all affect the activity of the enzyme. The corresponding mutants S77A, S77P, Q78A, and T80A in EcDTD were also tested for their deacylation activity. In the case of EcDTD, all mutants including S77A deacylated D-Tyr-tRNA Tyr as efficiently as the wild type (Figure 3-figure supplement 1B). The above data suggest that none of the protein residues around the scissile bond are involved in catalysis. Our earlier structural studies on Pab-NTD have suggested an RNA-assisted catalytic mechanism implicating the role of the 2′-OH in activating a water molecule for catalysis (Hussain et al., 2006, 2010). Subsequently, the catalytic role of RNA in proofreading has also been experimentally shown in the case of phenylalanyl-tRNA synthetase (PheRS) (Ling et al., 2007).
Unlike in the case of PheRS, the catalytic role of RNA in DTD could not be directly probed with a modified tRNA having a terminal 2′-deoxyadenosine since tyrosyl-tRNA synthetase (TyrRS) attaches the amino acid on 2′-OH of the ribose, which is then transesterified to 3′-OH for proofreading reaction. As we show later, this transesterification is required for DTD to act since it is expected to recognize aminoacyl moiety only when it is attached to the 3′-OH. A comparison of non-cognate and cognate substrate analog-bound structures of Pab-NTD had revealed that the space available in the reaction zone is crucial for catalysis. It was shown that upon cognate substrate binding this space is constricted due to a subtle movement of a crucial Lys side chain (Hussain et al., 2010). This limited space, therefore, does not allow the putative catalytic water molecule to be accommodated in that site as it would have serious short contacts, and hence no deacylation. Although we do not observe a water molecule in that region in DTD, there is enough space available for a water molecule to be positioned without any clashes. Furthermore, it is worth noting here that the site of catalysis in DTD is much more accessible to the external bulk solvent as compared to Pab-NTD and could be a plausible reason as to why we do not observe the water molecule crystallographically. Therefore, considering the structural similarity and conservation of substrate-binding modes between DTD and Pab-NTD along with the experimental evidence showing the absence of any direct role of protein side chains in the catalytic mechanism, we propose a similar RNAassisted catalysis in DTD also ( Figure 3C). The 2′-OH of the terminal ribose would activate a water molecule, which in turn makes a nucleophilic attack on the carbonyl carbon of the substrate. The resultant tetrahedral transition state would be stabilized by the oxyanion hole formed by main chain nitrogen atoms of Phe89 and Thr90 situated at a distance 3.03 Å and 4.05 Å respectively from the carbonyl oxygen of the substrate. It would then result in the subsequent cleavage of the ester bond between the D-aa and the tRNA. Therefore, taken together with studies on Pab-NTD and the primordial nature of its fold and function, the above data indicate that the DTD fold is an RNA-based catalyst in the proofreading reaction. Enantioselection mechanism A striking feature of the amino acid recognition site is the capture of all the atoms attached to the chiral centre Cα and the role of cross-subunit interactions, particularly a Gly-cisPro motif from both monomers inserted into the active site of the dimeric counterpart that plays a central role in the recognition mechanism, as described in 'Mechanism of L-amino acid rejection from the active site'. The aminoacyl moiety has interactions with residues from both monomers. The carbonyl oxygen interacts with the main chain nitrogen of Phe89 and the side chain amide of Gln88. Both the residues belong to the -SQFTL-motif. The α-amino group of D-tyrosine has an interaction with carbonyl oxygen of Gly149 from the cross-subunit Gly-cisPro motif. Such a capture of the carbonyl oxygen and the amino group of the incoming D-aa, automatically positions the Cβ in such a way that it makes favorable C-H … O hydrogen bond with the carbonyl oxygen of Pro150, again from the cross-subunit Gly-cisPro motif. In addition, the Cα also makes a weak C-H … N bond with the Gln88 side chain amide nitrogen. 
The interaction distances of the aminoacyl moiety have been summarized in Supplementary file 1A. With this mode of recognition of the configuration, the side chain of D-tyrosine is positioned in such a way that it projects out of the binding pocket and has no interaction beyond the Cβ atom as seen in Figure 2C. The atomic B-factors of the ligand clearly show a sharp rise in the side chain atoms beyond the Cβ ( Figure 2C, Figure 2-figure supplement 5, Supplementary file 1B). The superimposition of all the copies of ligand from both the crystal forms I and II shows considerable deviations in only the side chain atoms beyond Cβ (Figure 2-figure supplement 6). The lack of recognition of side chain atoms indicates that residues with different side chain chemistries and sizes are treated alike. Such a side chain-free recognition mechanism provides the basis for how nature has designed a single deacylase to deal with any D-aa-tRNA and reveals the crucial role played by weak hydrogen bonds in D-chirality selection. Mechanism of L-amino acid rejection from the active site If an L-aa was to bind in this pocket, it would have to do so in one of the three theoretically possible conformations shown in Figure 4. In conformation I, where the side chain swaps positions with Hα, it would result in serious clashes with several atoms in the binding pocket ( Figure 4C). Even the Cβ of L-Tyr would have short contacts of 3.08 Å with the Cδ and 2.69 Å with the carbonyl oxygen of Pro150. In conformation II, the side chain would occupy the place of the amino group ( Figure 4D). In this position it would be placed adjacent to 5′-OH and would therefore have short contacts with the preceding nucleotide (C75). In fact, the Cβ itself would have a short contact (2.56 Å) with the amide nitrogen of the substrate (ester oxygen in the real substrate). It should be highlighted here that the side chain rejection in both positions occurs at the Cβ level itself, which implies that an amino acid with even a minimal side chain like L-Ala will be rejected from occupying these two positions. In the third possibility of conformation III, the amino group would swap its position with Hα ( Figure 4E). In this case, in addition to losing its hydrogen bonding interaction with Gly149 carbonyl oxygen, the amino group would be placed also in an unfavorable environment at a distance of 3.07 Å from the Cδ atom of the non-polar side chain of Pro150 ( Figure 4E). This provides an elegant mechanistic design for L-chirality rejection from this pocket irrespective of the conformation and side chain chemistry of the incoming substrate. The rejection mechanism also rules out any other possible mode of D-aa binding than the one observed where the side chain is kept protruding out (Figure 4-figure supplement 1). The 'cross-subunit' Gly-cisPro motif plays a central role in the rejection of L-aas from binding in the pocket. The cis conformation of Pro150 is the key to ensuring that it cradles the chiral centre thus preventing both the amino group and the Cβ from occupying the position of Hα ( Figure 5A). To facilitate this rejection mechanism, Pro150 side chain is positioned rigidly in cis conformation by a conserved hydrophobic base formed by Phe40, Val86, Ile143, and the DTD-specific invariant Met141 ( Figure 5-figure supplement 1). The Gly149 and Pro150 carbonyl oxygens make H-bond interactions with the α-amino group and the Cβ of the substrate respectively, thereby reinforcing the binding of D-aa in the pocket. 
Both the carbonyl oxygens are also positioned tightly by cross-subunit interactions with Met141 main chain nitrogen and Gln88 side chain nitrogen, respectively ( Figure 5A). The structure, therefore, suggests a strict rejection of L-aas from the pocket, enabling DTD to specifically remove only D-aas coupled to tRNAs. Conservation of the strict configuration specificity across species In order to prove the strict rejection of L-aa by the active site of DTD, biochemical analyses with PfDTD were performed. Although significant deacylation activity against D-Tyr-tRNA Tyr was observed at 500 pM PfDTD, no L-Tyr-tRNA Tyr deacylation was found even with 1000-fold higher enzyme concentration at 500 nM ( Figure 5B). Furthermore, to rule out the possibility of any Plasmodium-specific phenomenon and to test the universal nature of the rejection mechanism, we carried out deacylation experiments with EcDTD as well. Similar to PfDTD, EcDTD showed significant deacylation of D-Tyr-tRNA Tyr with 50 nM enzyme, whereas no detectable L-Tyr-tRNA Tyr deacylation was seen even at 5 μM ( Figure 5C). Biochemical studies with both enzymes not only confirm the stringent chiral specificity of this key process but also suggest conservation of the mechanism across species. Strict rejection of L-aa-tRNA as seen with NMR-based binding studies We further probed the enantiomeric rejection mechanism in solution using NMR-based 2D 15 N-1 H Transverse Relaxation Optimized Spectroscopy (TROSY) experiments with a nonhydrolyzable analog mimicking L-Tyr attached to tRNA Tyr , L-Tyr3AA, and compared it with D-Tyr3AA. Titration of 15 N-PfDTD with D-Tyr3AA at molar ratios of 1:0, 1:5, 1:10, and 1:15 led to chemical shift perturbations in a number of resonances and showed saturation around 1:15, thereby clearly indicating a specific binding to PfDTD ( Figure 5D, Figure 5-figure supplement 2). On the other hand, L-Tyr3AA titration did not cause any change in the amide resonances of 15 N-PfDTD even up to 1:15 molar ratio, highlighting a complete lack of specific binding ( Figure 5D, Figure 5-figure supplement 2). Thus, the 2D 15 N-1 H TROSY studies further confirmed the strict rejection of L-aa from the active site of DTD. 2′-vs 3′-deacylase Another important mechanistic aspect that is clearly evident from this structure is that DTD acts exclusively on D-aas charged on 3′-OH of the terminal adenosine. aaRSs aminoacylate tRNAs at either 2′-OH or 3′-OH in a class-dependent way (Eriani et al., 1990). Biochemical studies have revealed deacylation mechanism of DTD against aa-tRNA pairs belonging to both classes of aaRS. However, it was not clear whether DTDs would act on D-aas linked to 2′-OH or 3′-OH or both. The structure shows that the 2′-OH is positioned in a confined area with the help of tight interactions with Gly138 main chain nitrogen and Ser87 side chain hydroxyl group. Modeling even the simplest of amino acids on the 2′-OH shows severe steric clashes irrespective of the ribose pucker ( Figure 6A-C). We have further confirmed this mechanistic proposal using 2D 15 N-1 H TROSY experiments. Titration of 15 N-PfDTD with D-Tyr3AA showed chemical shift perturbations for a number of resonances ( Figure 5D, Figure 5-figure supplement 2). On the other hand, titration with D-Tyr2AA (analog of D-tyrosine bound to 2′-OH of adenosine) resulted in no observable chemical shift perturbations ( Figure 6D,E). 
This confirms that the enzyme acts on tRNAs only when the amino acid is either attached to the 3′-OH or transferred to the 3′-OH from the 2′-OH through rapid transesterification. A similar mechanistic mode of operation of Pab-NTD delineates this DTD-like fold as a 3′-specific deacylase enzyme (Hussain et al., 2006, 2010).

Gly-cisPro motif is essential for function
The mechanistic understanding based on the cognate substrate analog-bound structure suggests a crucial role for the cross-subunit Gly-cisPro motif in enantioselectivity and rejection of L-aas from the pocket. To experimentally demonstrate the crucial role of this unique motif for DTD function, we carried out deacylation assays with PfDTD by mutating these two critical residues. A complete loss of activity was observed for both G149A and P150A mutants (Figure 7A). We also carried out a deacylation assay with the G149A/P150A double mutant and, similar to both single mutants, it showed a total loss of activity (Figure 7A). The biochemical studies thus clearly show that the Gly-cisPro motif is essential for DTD function. We further wanted to ensure that the observation is not Plasmodium-specific. Therefore, we performed the same biochemical study with the mutants of EcDTD to ensure that the critical role of the Gly-cisPro motif is universal. Similar to PfDTD, both G137A and P138A mutants of EcDTD showed a complete loss of deacylation function (Figure 7B). We also tested the G137A/P138A double mutant for deacylation function and it also showed no activity, like the individual point mutants (Figure 7B). The biochemical analyses with the mutants of both PfDTD and EcDTD prove the critical role played by the unique Gly-cisPro motif in DTD function and also suggest the universality of its crucial role irrespective of the organism.

Discussion
The study provides insights into a fundamental enantioselective mechanism involved in the enforcement of homochirality in proteins by specifically decoupling D-aas from tRNAs. The earlier structural studies on DTD provided a mechanistic model based either on docking approaches using the apo structure (Lim et al., 2003) or on complex structures with ligands that do not mimic the cognate substrate (Bhatt et al., 2010). A superposition of the earlier known structures with that of the D-Tyr3AA-bound complex presented here shows that the docked substrate as well as the free D-aas and ADP were positioned outside the actual binding pocket (Figure 1-figure supplement 1). Therefore, the key to identifying the mechanism, as seen from this study, is the capturing of the D-Tyr3AA ligand that is bound in the actual substrate-binding pocket. An analysis of all known structures of proofreading domains in complex with post-transfer substrate analogs helped us to define certain parameters, such as percentage buried surface area of the ligand, number of interactions, conservation of interacting residues, etc., that can be used to assess the binding characteristics of ligand complexes (Figure 1-figure supplement 2, Supplementary file 1C). Comparison of these parameters from all known complex structures of proofreading domains with the structure presented in the current study places our structure in the same bracket as the other well-studied proofreading domains (Supplementary file 1C). We mutated Phe89, which has been shown to stack with the adenine in the ADP complex (Bhatt et al., 2010), to Ala and show that the mutant is as active as the wild-type PfDTD (Figure 3-figure supplement 2A,C).
The corresponding mutant F79A in EcDTD was also completely active suggesting that the Phe has no significant role in binding the adenine (Figure 3-figure supplement 2D). Furthermore, the earlier work had implicated a conserved Thr90 as the catalytic residue that was proposed to mount a nucleophilic attack on the carbonyl carbon of the substrate (Lim et al., 2003;Bhatt et al., 2010). However, our analysis clearly shows that not only the distance (5.72 Å) of γ-hydroxyl group of Thr90 from the carbonyl carbon is unfavorable for any nucleophilic attack but also it is oriented away from the point of attack where it is strongly tethered to Thr152 main chain atoms through a highly conserved cross-subunit interaction (Figure 3-figure supplement 2B). To experimentally demonstrate that Thr90 is not the catalytic residue as had been proposed earlier, we mutated this residue to Ala in both PfDTD and EcDTD, and showed that they still efficiently deacylated D-Tyr-tRNA Tyr (Figure 3-figure supplement 2C,D). These data, therefore, rule out the earlier propositions not only with respect to the adenosine-binding site but also the catalytic mechanism. More importantly, the current study identifies the key role of an invariant cross-subunit Gly-cisPro motif in solving a fundamental problem of absolute configuration-based selectivity. The most striking feature of the Gly-cisPro motif is the near-parallel fixation of the two carbonyl groups at an angle of ∼20°, a highly conserved structural feature in DTDs irrespective of the presence or absence of ligand as seen in 72 different observations (including 10 from this study) from five different organisms ( Figure 8A). The Ramachandran dihedral angles of both residues remarkably illustrate a striking conservation, which allows DTD to selectively recognize the chiral centre. It also provides a structural explanation for having an invariant Gly in that position as no other residue can normally lie in that region of Ramachandran map ( Figure 8B,C). Since the cellular milieu will be in abundance with L-aa-tRNAs, when compared to D-aa-tRNAs, such a positioning of the critical enantioselective components, as seen here, prevents even a promiscuous deacylation of L-aa-tRNAs leading to their depletion from the pool, as shown by the biochemical studies with 1000-fold excess of DTD in two different systems. The essential role of Gly-cisPro motif in chiral discrimination is also strongly indicated by its absolute invariance in all DTD sequences from eubacteria to higher eukaryotes ( Figure 8C). Previous work has shown the ability of L-proline to catalyze asymmetric synthesis of simple sugars leading to their enantioenrichment (Breslow and Cheng, 2010;Hein and Blackmond, 2012). Based on the work, there has been a proposal of a role of L-proline in symmetry-breaking during the prebiotic era. In the present work also, we show the critical role of a proline residue as a part of a motif in a process involved in enforcement of homochirality. Overall, the work has unveiled a fundamental cellular mechanism that is responsible for enforcing and perpetuating L-aa homochirality in proteins. A mechanistically unique solution to the problem of enantioselectivity employing two carbonyl oxygens from a 'cross-subunit' Gly-cisPro dipeptide has been shown to be responsible for D-chirality selection and strict L-chirality rejection from the active site of DTD. 
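The ~20° near-parallel fixation of the Gly149 and Pro150 carbonyls quoted above is simply the angle between the two C=O bond vectors. A minimal numpy sketch of that measurement is shown below; the coordinates are placeholders chosen only to illustrate the calculation, not values taken from the deposited structures.

```python
import numpy as np

def co_angle(c1, o1, c2, o2):
    """Angle in degrees between two carbonyl C=O bond vectors."""
    v1 = np.asarray(o1, dtype=float) - np.asarray(c1, dtype=float)
    v2 = np.asarray(o2, dtype=float) - np.asarray(c2, dtype=float)
    cos_t = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0)))

# Placeholder coordinates for the Gly149 and Pro150 carbonyl C and O atoms;
# in practice these would be read from the PDB coordinates of the dimer.
gly_C, gly_O = [0.00, 0.00, 0.00], [1.23, 0.00, 0.00]
pro_C, pro_O = [3.80, 1.10, 0.50], [4.96, 1.52, 0.50]
print(f"inter-carbonyl angle: {co_angle(gly_C, gly_O, pro_C, pro_O):.1f} deg")
```

Applied to the actual coordinates of the 72 observations mentioned above, the same one-line geometry would reproduce the conserved near-parallel arrangement described in the text.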
The conserved and indispensable nature of the motif in DTD argues strongly for its crucial role in solving this key chiral discrimination problem in biology. The presence of DTD-fold and function in all kingdoms of life suggests an important role such systems have played in enforcing homochirality during early evolution of the translational apparatus, and high levels of expression in neuronal cells indicate a crucial role of DTD in higher organisms, which still needs to be explored. Cloning, expression and protein purification The gene encoding DTD was PCR amplified from P. falciparum genomic DNA and inserted between NdeI and XhoI sites of pET-21b vector (Novagen, Billerica, MA). For untagged construct, a stop codon was incorporated in the reverse primer whereas in case of C-terminal 6X His-tagged (C-His) construct, there was no stop codon in the reverse primer. Untagged protein was used for crystallization and biochemical analysis, while NMR experiments were performed with C-His protein. The recombinant plasmid containing our gene of interest was transformed in E. coli BL21 (DE3) cells for overexpression. The untagged protein was purified by a two-step protocol including cation exchange chromatography (CEC) followed by gel filtration chromatography (GFC). In CEC, the induced cell lysate was loaded onto Sulfopropyl-Sepharose column (Amersham Pharmacia, UK) pre-equilibrated with 50 mM BisTris pH 6.5, 20 mM NaCl and then eluted in a linear gradient of NaCl from 20 mM to 500 mM. The eluted protein was further purified to homogeneity by GFC using a Superdex-75 column (Amersham Pharmacia). The final protein was concentrated to 10 mg/ml. EcDTD was purified as mentioned previously (Hussain et al., 2006). All proteins were expressed normally except for G137A and double mutant G137A/P138A of EcDTD, which were purified from inclusion bodies using the following procedure. After lysis, the inclusion bodies were washed thoroughly with buffer containing 1% Triton X-100, followed by 1% sodium deoxycholate wash and finally incubated overnight in unfolding buffer containing 6M guanidinium hydrochloride (GdmHCl). The unfolded protein was then loaded onto Ni-NTA column (Amersham Pharmacia) pre-equilibrated with unfolding buffer and subsequently washed with 1% Triton X-100, followed by 0.1% β-cyclodextrin wash. This was followed by 30 mM imidazole wash to get rid of any contaminant proteins. The protein was finally eluted with 250 mM imidazole and immediately diluted in refolding buffer containing 400 mM L-Arg. The protein was further purified to homogeneity using GFC. Circular Dichroism analysis was performed to ensure that the proteins were properly folded (Figure 7-figure supplement 1). Co-crystallization with substrate-mimicking analog D-Tyr3AA Co-crystallization was attempted with a number of constructs of DTD from E. coli, Mycobacterium tuberculosis, Vibrio cholera, Leishmania major but none of them yielded a ligand-bound structure. Successful co-crystallization was achieved only with PfDTD. The nonhydrolyzable analogs D-Tyr3AA, L-Tyr3AA, and D-Tyr2AA were obtained after custom synthesis from Jena Biosciences, Germany. The pure protein sample was mixed with the ligand in a molar ratio of 1:20 and the premix was incubated at 4°C overnight. Initial crystallization conditions were screened at 4°C and 20°C with Index and Crystal screen 1 and 2 (Hampton Research, Aliso Viejo, CA) and JBS classic (Jena Biosciences) in sitting drop setups using 96-well plates from MRC. 
The experiments were set up by mixing 1 μl of protein:ligand premix with 1 μl of reservoir buffer with the help of Mosquito crystallization robot (TTP LabTech, UK). The hits obtained were further optimized in a hanging drop vapor diffusion setup using 24-well Iwaki plates. PfDTD+D-Tyr3AA crystal I was obtained in 0.1M HEPES pH 7.0, 0.6M NaCl, 32% PEG3350, while crystal II of the same was obtained in 0.1M BisTris pH 6.0, 0.4 M NaCl, 28% PEG3350. X-ray diffraction data collection and structure determination The diffraction data were collected at the in-house X-ray facility after screening several hundreds of ligand complex crystals to get high resolution datasets. The dataset for PfDTD+D-Tyr3AA crystal I was collected using RigakuMicromax007 HF rotating-anode generator that produces CuKα X-rays of wavelength 1.54 Å and MAR345dtb image-plate detector from MAR Research. The crystal was mounted on a nylon loop and flash-cooled directly without the use of any cryoprotectant solution in a nitrogengas stream at 100 K using Oxford Cryostreamcooler (Oxford Cryosystems, UK). The dataset for PfDTD+D-Tyr3AA crystal II was collected using FR-E+ SuperBright X-ray generator from Rigaku equipped with VariMax HF optic and R-AXIS IV++ image plate detector. The data were processed using HKL2000 (Otwinowski and Minor, 1997) and the structure was solved by molecular replacement using MOLREP-AUTO MR from the CCP4 suite (CCP4, 1994) with PfDTD apo structure (PDB id: 3KNF) as the search model. The structure was refined with the help of CNS (Brunger et al., 1998) and REFMAC (Murshudov et al., 1997), while COOT (Emsley and Cowtan, 2004) was used for model building. The restraints for refinement of ligand molecules were obtained from PRODRG server (Schuttelkopf and van Aalten, 2004). The structure was validated using PROCHECK (Laskowski et al., 1993) and the figures were generated with the help of PyMOL (Schrodinger, 2010). Biochemical assays The mutants for biochemical assays were generated using QuickChange XL site-directed kit (Stratagene, La Jolla, CA) and the proteins were purified by the same protocol as for the wild type. E. coli tRNA Tyr was transcribed in vitro using MEGAshortscript (Ambion, Austin, TX) and 3' end-labeled using standard protocol by incubating the tRNA with CCA-adding enzyme in presence of [α-32 P]-ATP (Ledoux and Uhlenbeck, 2008). D-Tyr-tRNA Tyr and L-Tyr-tRNA Tyr were generated by incubating 20 mM Tris pH 7.8, 7 mM MgCl 2 , 5 mM Dithiothreitol (DTT), 2 mM ATP, 0.2 mM amino acid (D-Tyr or L-Tyr), 0.5 μM labeled tRNA Tyr , 1 U/ml pyrophosphatase with 2 μM purified E. coli TyrRS at 37°C for 15 min. Aminoacylation reaction was followed by phenol extraction and ethanol precipitation of aminoacylated tRNA, which was finally resuspended in 5 mM sodium acetate pH 4.6. Deacylation assays were performed by incubating 20 mM Tris pH 7.2, 5 mM MgCl 2 , 5 mM DTT, 0.2 mg/ml bovine serum albumin (BSA), 0.2 μM labeled D-Tyr-tRNA Tyr or L-Tyr-tRNA Tyr at 30°C with 500 pM of PfDTD and 50 nM of EcDTD or the mutants enzyme as the case may be. Reaction mix at various time points were subjected to S1 nuclease digestion for 30 min at 22°C and analyzed by thin-layer chromatography (TLC) by spotting 1 μl on PEI cellulose sheet (Merck KGaA, Germany). An example of a TLC run has been shown in Figure 5-figure supplement 3. The mobile phase for TLC was composed of 100 mM ammonium chloride and 5% glacial acetic acid. TLC sheets were exposed to imaging plate from Fujifilm, Japan. 
Phosphor imaging was done using Typhoon Trio Variable Mode Imager (Amersham Biosciences, Piscataway, NJ) and Image Gauge V4.0 software was used for quantification. Each experiment was carried out in triplicates. Transverse relaxation optimized NMR spectroscopy 2D 15 N-1 H TROSY experiments were performed on a Bruker 600 MHz NMR spectrometer equipped with triple resonance cryoprobe (Bruker, Billerica, MA). C-His construct of PfDTD was expressed in minimal media with 15 NH 4 Cl as the sole nitrogen source in order to achieve uniform labeling. The protein was purified by affinity chromatography using Ni-NTA column in batch mode. For binding studies, 200 μM U-15 N-PfDTD in 50 mM HEPES pH 7.0, 50 mM NaCl was titrated with substrate analogs. Chemical shift perturbations in PfDTD upon titration were monitored by a series of 2D 15 N-1 H TROSY spectra collected with increasing concentrations of ligand. Four datasets were recorded for each ligand at protein:ligand molar ratios of 1:0, 1:5, 1:10, and 1:15. The experiments were repeated twice with two different batches of protein. The data processing and figure preparation were done using Sparky.
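The chemical shift perturbations monitored in these titrations are commonly condensed into a single combined 1H/15N displacement per residue. The paper does not spell out the exact formula it used, so the weighting factor in the sketch below (0.154 for the backbone amide nitrogen) is only the widely used convention, not necessarily the one applied in this study.

```python
import numpy as np

def combined_csp(delta_H_ppm, delta_N_ppm, n_weight=0.154):
    """Combined amide chemical shift perturbation:
    sqrt(dH^2 + (w * dN)^2), with w a conventional 15N scaling factor."""
    dH = np.asarray(delta_H_ppm, dtype=float)
    dN = np.asarray(delta_N_ppm, dtype=float)
    return np.sqrt(dH ** 2 + (n_weight * dN) ** 2)

# Example: per-residue shifts between the apo (1:0) and saturated (1:15) TROSY spectra.
# The numbers are illustrative only, not measured values from the study.
delta_H = [0.012, 0.085, 0.002]   # ppm, 1H dimension
delta_N = [0.10, 0.55, 0.03]      # ppm, 15N dimension
print(combined_csp(delta_H, delta_N))
```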
9,394
2013-12-03T00:00:00.000
[ "Biology", "Chemistry" ]
Flex-Printed Ear-EEG Sensors for Adequate Sleep Staging at Home

A comfortable, discrete and robust recording of the sleep EEG signal at home is a desirable goal but has been difficult to achieve. We investigate how well flex-printed electrodes are suitable for sleep monitoring tasks in a smartphone-based home environment. The cEEGrid ear-EEG sensor has already been tested in the laboratory for measuring night sleep. Here, 10 participants slept at home and were equipped with a cEEGrid and a portable amplifier (mBrainTrain, Serbia). In addition, the EEG of Fpz, EOG_L and EOG_R was recorded. All signals were recorded wirelessly with a smartphone. On average, each participant provided data for M = 7.48 h. An expert sleep scorer created hypnograms and annotated grapho-elements according to AASM based on the EEG of Fpz, EOG_L and EOG_R twice, which served as the baseline agreement for further comparisons. The expert scorer also created hypnograms using bipolar channels based on combinations of cEEGrid channels only, and bipolar cEEGrid channels complemented by EOG channels. A comparison of the hypnograms based on frontal electrodes with the ones based on cEEGrid electrodes (κ = 0.67) and the ones based on cEEGrid complemented by EOG channels (κ = 0.75) both showed a substantial agreement, with the combination including EOG channels showing a significantly better outcome than the one without (p = 0.006). Moreover, signal excerpts of the conventional channels containing grapho-elements were correlated with those of the cEEGrid in order to determine the cEEGrid channel combination that optimally represents the annotated grapho-elements. The results show that the grapho-elements were well-represented by the front-facing electrode combinations. The correlation analysis of the grapho-elements resulted in an average correlation coefficient of 0.65 for the most suitable electrode configuration of the cEEGrid. The results confirm that sleep stages can be identified with electrode placement around the ear. This opens up opportunities for miniaturized ear-EEG systems that may be self-applied by users.
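The agreement values above (κ = 0.67 and κ = 0.75) are Cohen's kappa computed between two epoch-wise sleep stagings. A minimal sketch of that comparison is shown below; the two example hypnograms are invented purely to demonstrate the computation and are not data from this study.

```python
# Epoch-wise agreement between two hypnograms via Cohen's kappa (AASM stage labels).
from sklearn.metrics import cohen_kappa_score

hypnogram_ref = ["W", "N1", "N2", "N2", "N3", "N3", "N2", "REM", "REM", "W"]
hypnogram_test = ["W", "N1", "N2", "N3", "N3", "N3", "N2", "REM", "N1", "W"]

kappa = cohen_kappa_score(hypnogram_ref, hypnogram_test)
print(f"Cohen's kappa = {kappa:.2f}")
```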
INTRODUCTION
Capturing sleep stages in a medically meaningful way requires the infrastructure of a sleep laboratory and the placement of many sensors by well-trained personnel. Acquisition of electroencephalogram (EEG) data is the most fundamental part of a polysomnography (PSG). While the PSG administered in the sleep lab in controlled conditions continues to play an important role in sleep medicine, it has become clear that more options are needed to address different questions concerning sleep and sleep disorders. The German Society for Sleep Research and Sleep Medicine (DGSM) has long been drawing attention to the need for research with regard to simplified procedures to support sleep diagnostics with possible applications for at-home studies (1), as the ever-increasing demand for diagnostics leads to long waiting lists for sleep laboratories. Additionally, standard PSG in a sleep lab is obtrusive and may lead to atypical sleep patterns in a patient, which can complicate the diagnostic process (2). The consumer market has identified those experiencing trouble sleeping as customers. Many gadgets, devices, and apps are available and promise information on sleep duration, sleep quality or even sleep apnea. Yet, sleep researchers and practitioners alike have questioned the validity of consumer-level sleep tracking devices (3,4). Most solutions are neither medical devices nor validated against standard procedures. Nevertheless, the marketing claims can be ambiguous, and customers may perceive the products as scientifically validated because of their appearance. Inaccurate feedback on one's sleep, however, can corrupt people's perception of their quality of sleep, worsening symptoms and hindering appropriate diagnosis and treatment. Negative feedback on sleep quality can harm daytime functioning and increase reported daytime fatigue, as tested in a sham experiment with insomniac patients (5). The development of accurate, low-threshold sleep monitoring solutions that could be self-applied and used at home may help to avoid those problems. Several research groups and for-profit companies have developed compact EEG sleep monitoring systems that may help to re-define how sleep EEG can be taken from the scalp in ways that are easy to apply without preparation and not disruptive to wear during sleep. They differ in placement (face, ear) as well as type and number of channels but have in common that they present ideas outside of the box of standard PSG, with some reporting promising results. Recent examples of facial solutions include self-applicable electrodes developed for emergency medicine (6,7), a printed dry-electrode array applied to the face (8) and auto-adhesive electrodes attached with a headband (9).
Devices focusing on the ear canal include both dry and wet in-ear electrodes that fit into a personalized earpiece (10,11) as well as in-ear sensors attached to a foam earplug (12). In the consumer market, several commercially available devices have been scientifically evaluated. Examples include sensor systems with single-use electrodes applied to the forehead (13,14) or dry electrodes in a headband, with brush-like silicon electrodes at the back of the head and flat dry electrodes on the forehead (15,16). These systems have in common that they offer all necessary hardware in a compact, easy-to-use setup. However, in many cases, data analysis runs on company-owned servers, and direct access to raw data is refused, which is incompatible with independent scientific evaluation attempts. In addition to the development of novel devices, a top-down approach to EEG may help identify simple electrode constellations that yield the best results. Databases of clinical PSG data have been analyzed to find the smallest working combination of sensors that allows solid sleep staging. Examples include evaluating sleep stages from a single electrode (17,18) as well as the use of machine learning approaches that use the smallest number necessary for a correct categorization from a large number of available parameters and thus provide information about possible reduced sensor constellations (19,20). Currently, we know of no miniaturized sleep monitoring system that is fully self-applicable for use at home in the sense that all equipment would be easily accessible to a layperson. In this study, we focused on combining the signal quality of wet scalp electrodes with the usability of easily applied dry electrodes by applying wet flex-printed electrodes to the hairless skin around the ear. By doing so, we built on existing knowledge concerning ear-EEG and moved one step closer toward self-administered home sleep EEG acquisition. We tested an approach to at-home sleep monitoring, applying novel, unobtrusive hardware and using statistical means to validate the data quality and sensor selection. We focused on easily obtainable sensor parts to build a sensor system that can easily be adapted or replicated. For the EEG, the cEEGrid was used (21,22). The cEEGrid is a flexible, discrete wet EEG system and is particularly suitable for measuring EEG in the home environment. It consists of a printed circuit board (PCB) including flat silver electrodes on a flexible polymer sheet in the shape of a C. Fitting around the hairless skin around the ear, it is self-adhesive and easy to apply, making it easy to use for measuring sleep at home. Compared with ear-canal electrodes or single electrodes, the cEEGrid has the advantage of larger inter-electrode distances, which allows for the recording of larger amplitude signals (23). In total, the cEEGrid offers eight channels per ear that are referenced to the mastoid. The suitability of the cEEGrid for measurements during sleep of up to 12 h has already been demonstrated empirically (11,24). Both publications referred to the same dataset, recorded in a sleep laboratory with cEEGrids applied to both ears. Reference (24) found that sleep stages were difficult to differentiate based on cEEGrid channels compared to a full PSG, while differentiation between sleep and wake showed slightly higher agreement. Reference (25) found that the automatic scoring of cEEGrid data led to a similar accuracy as the expert scoring of a standard PSG, which encourages the further exploration of cEEGrid sleep data.
The higher accuracy of automatic compared to manual scoring of cEEGrid data was attributed to the non-standardized positions of the cEEGrid electrodes and to possible inexperience in manually scoring these kinds of data. A correlation index between cEEGrid and scalp derivations was derived within specific frequency bands (alpha, beta, theta, and delta), resulting in good correlations when using electrode averages and larger electrode distances. In the current study, we used a single cEEGrid, thereby reducing preparation time to a few minutes, an important feature on the way to a future self-application system. In the manual scoring of the data, linear combinations of cEEGrid channels were used and labeled accordingly to extrapolate to classical PSG channels. In addition, participants slept at home and started and finished the recording by themselves. Since home application makes the full PSG setup impractical, an approximation to the relevant EEG and EOG (electrooculogram) channels of the PSG was used. Previous research showed that a single channel (Fpz) can be sufficient to differentiate REM and deep sleep stages (17). EOG channels give additional information on eye movements that facilitates sleep staging and offers essential information for sleep coders. Therefore, we measured full nights of sleep at home using one cEEGrid, one EEG channel (Fpz), and two EOG channels in a lightweight, mobile setup. Our aim was to identify sleep stages with this simple setup, enlisting a trained sleep scorer to manually code the EEG data. To determine the reliability of the sleep expert scorer, we calculated the test-retest reliability of hypnograms that were created at two different time points (2018, 2020) using the EEG of Fpz, EOG_L, and EOG_R. The resulting agreement served as the best achievable benchmark for further comparisons. We then created bipolar channels based on combinations of cEEGrid channels to approximate channels traditionally used for EEG in a PSG, thereby taking into account the spatial filtering properties of EEG channels. We defined the cEEGrid data as the first experimental dataset and the cEEGrid+EOG data as the second experimental dataset. Both datasets were independently coded by the sleep expert scorer according to AASM criteria for sleep staging. The resulting hypnograms were compared to the frontal channel + EOG hypnograms. Further, we tested how similarly so-called grapho-elements of the sleep EEG are represented with this setup compared to the Fpz signal. We selected K-complexes and sleep spindles, characteristic of stage N2 sleep, as suitable grapho-elements to include in the analysis. We tested which cEEGrid channel combinations would give the best representation of the annotated grapho-elements: K-complexes and sleep spindles annotated by the expert scorer in the Fpz-EOG were correlated with every channel and all possible channel-pair combinations of the cEEGrid EEG. This correlation analysis explored whether combinations of ear-EEG channels can be used to mimic the EEG measured at more distant scalp locations, like Fpz. If supported, this approach would motivate a selection of cEEGrid channel combinations that could approximately represent the EEG measured by classical PSG scalp electrodes like Fp2, F4, C4, P4, and O2.

Participants

Ten participants (8 females, 2 males, mean age = 28.4 ± 4.3 years) were recruited from members and friends of the Department of Psychology, Carl von Ossietzky University of Oldenburg, Germany.
Recruiting among this group was necessary because participants were visited by the experimenter in their private homes, and it was preferred that she be familiar to the participants. Participants reported no sleep disorders. Each participant provided one night of sleep data, recorded at home. The study was approved by the local ethics board.

Data Acquisition

The experimenter visited the participants' homes in the evening to prepare them for the night's recording. Participants were asked to wear their nightwear and no make-up. Participants gave written informed consent to participate in the study before preparation began. The experimental setting is shown in Figure 1. One cEEGrid (21) was prepared with abrasive electrolyte gel (ABRALYT HiCl, Easycap, Germany) and placed around the right ear with a self-adhesive sticker. In addition, a single sintered Ag/AgCl ring electrode was placed at the Fpz location. For the EOG signal, two Ag/AgCl electrodes were placed diagonally near the eyes in accordance with the AASM (American Academy of Sleep Medicine) manual. All ring electrodes were then filled with the electrolyte gel. The electrodes were connected to a SMARTING SLEEP amplifier (mBrainTrain, Serbia), which includes a built-in gyro sensor, and the amplifier was attached to a chest strap. The amplifier connected via Bluetooth to a commercially available smartphone (Sony Xperia Z1), which was placed close to the participant's bed. Impedances were checked in the SMARTING app, and the recording commenced when impedances were generally below 20 kOhm. To secure the cables for sleep, tubular bandages were applied to the participant's head and to the connector bundling the cables. Overall, 9 EEG channels (right cEEGrid channels R1-R8, Fpz), 2 EOG channels (EOG_L and EOG_R), and 3 gyro channels were recorded at a sampling rate of 250 Hz for the duration of the night's sleep (reference and ground are located at the center of the cEEGrid and placed over the right mastoid; see Figure 1).

Data Preprocessing

The following preprocessing steps were applied to prepare the recorded EEG data for the annotation by the sleep expert scorer and for the correlation analysis. For the annotation, three channel layouts (Fpz+EOG, cEEGrid, and cEEGrid+EOG) were used, as listed in Table 1. Channels were selected and, where needed, re-referenced and relabeled. The dataset Fpz+EOG serves as the reference against which the first experimental dataset (cEEGrid) and the second experimental dataset (cEEGrid+EOG) are compared. For the cEEGrid layouts, linear combinations of cEEGrid channels were used, motivated to approximately represent the EEG measured at the classic PSG-relevant scalp positions Fp2, F4, C4, P4, and O2 when referenced to the right mastoid process M2 (as shown in Figure 2). The channel combinations were labeled according to PSG standards to ensure their familiarity to the scorer (e.g., Fp2_M2, F4_M2, ...). In the dataset cEEGrid+EOG, two EOG channels were added and re-referenced to R6 as the classical mastoid reference. After this step, the EEG data were bandpass filtered, using a phase-true, 4th-order Butterworth filter with a passband of 0.5 to 40 Hz, to reduce electrode drift and high-frequency noise. Following this step, the data were downsampled to 125 Hz.
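The preprocessing chain described above (phase-true 4th-order Butterworth band-pass from 0.5 to 40 Hz, then downsampling from 250 to 125 Hz) can be sketched in a few lines of Python with SciPy. This is a minimal illustration, not the authors' actual pipeline; the array shapes and example data are placeholder assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, decimate

FS_IN, FS_OUT = 250.0, 125.0  # recording and target sampling rates (Hz)

def preprocess(eeg, fs_in=FS_IN):
    """Zero-phase Butterworth band-pass (0.5-40 Hz, order 4) followed by
    downsampling to 125 Hz. `eeg` is a channels-by-samples array."""
    sos = butter(4, [0.5, 40.0], btype="bandpass", fs=fs_in, output="sos")
    filtered = sosfiltfilt(sos, eeg, axis=-1)  # forward-backward, phase-true
    # 250 -> 125 Hz is an integer factor of 2; decimate() applies its own
    # anti-aliasing filter before discarding every second sample.
    return decimate(filtered, int(fs_in // FS_OUT), axis=-1, zero_phase=True)

# Example: one 30-s epoch of the 9 recorded EEG channels
epoch = np.random.randn(9, int(30 * FS_IN))
print(preprocess(epoch).shape)  # (9, 3750)
```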
Data Annotation of the Sleep Expert

An expert polysomnographic technologist with 14 years of experience in the area of polysomnography (MP, subsequently referred to as the expert scorer) annotated the EEG data in consecutive 30 s segments, using open-source sleep analysis software (26). Sleep staging was done according to the AASM guidelines in the four annotation conditions described in Table 2 [annotated stages: awake (W), N1, N2, N3, and REM (rapid eye movement)]. Note that the novelty of the setup meant that the technical and digital specifications deviated considerably from the AASM guidelines. The sleep staging guidelines, including amplitude thresholds, were observed to the best of the scorer's abilities. In addition, grapho-elements (K-complexes and sleep spindles) were annotated in the condition Fpz+EOG 2nd Rating. The cEEGrid channel combinations evaluated in the correlation analysis included R1-R6, R2-R6, R3-R6, R4-R6, R5-R6, R6, R1-R7, R2-R7, R3-R7, R4-R7, R5-R7, R6-R7, R7, R1-R8, R2-R8, R3-R8, R4-R8, R5-R8, R6-R8, R7-R8, and R8. To calculate the average correlation coefficient over the epochs of one grapho-element, or to further average over participants, the individual correlation coefficients were first Fisher z-transformed, then averaged, and finally back-transformed by taking the hyperbolic tangent.

Statistical Analysis of Hypnograms

To statistically evaluate the accordance of the different hypnograms, Cohen's Kappa was used to determine the respective inter-rater reliability in the three test conditions described in Table 3 [agreement scale according to (27)]. To calculate the reliability over participants, the corresponding hypnograms were concatenated. Confusion matrices of the different hypnograms were then calculated and used to determine Cohen's Kappa and its standard error. Cohen's Kappa was calculated for every single sleep stage vs. the rest, and for all sleep stages combined.

Hypnograms

Hypnograms of all four annotation conditions are shown in Figure 3 for a representative participant. (Annotation conditions, cf. Table 2: Fpz+EOG 2nd Rating: based on dataset Fpz+EOG of all ten participants, two years after the 1st rating, with grapho-elements (K-complexes and sleep spindles) additionally annotated; cEEGrid Rating: based on dataset cEEGrid of all ten participants; cEEGrid+EOG Rating: based on dataset cEEGrid+EOG of all ten participants.) Figure 4 shows the results of the correlation analysis averaged over epochs for K-complex and sleep spindle events, respectively. The effect is similarly visible in the average over participants and in single-participant results. R1 and R1-R4 scored the highest positive average correlation coefficients (K-complex: 0.68, sleep spindle: 0.62) and R5-R8 scored the highest negative ones (K-complex: −0.59, sleep spindle: −0.52). It is noticeable that K-complexes and sleep spindles annotated in EEG recorded at Fpz are best represented in the EEG of cEEGrid channel combinations that point in the direction of Fpz. This directional dependence is shown exemplarily in Figure 5 for six cEEGrid channel combinations. Overall, these results confirm the assumption that combinations of ear-EEG channels can be used to estimate the EEG measured at more distant scalp locations and thereby support the selection of channel combinations used for cEEGrid and cEEGrid+EOG, cf. Table 1.

DISCUSSION

The present study explored whether flex-printed ear-EEG sensors can be used to capture sleep stages from recordings performed with a wireless amplifier and off-the-shelf smartphone technology at home.
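The two quantitative procedures at the core of the analyses just described, Fisher-z averaging of per-epoch correlations across bipolar channel pairs and Cohen's Kappa computed from scorer-vs-scorer confusion matrices, can be sketched compactly. The snippet below is a schematic re-implementation under the definitions stated above, not the study's analysis code; the data layout (lists of channels-by-samples arrays with cEEGrid channels R1-R8 in rows) is a hypothetical assumption.

```python
import numpy as np
from itertools import combinations

def fisher_average(rs):
    """Average correlations: Fisher z-transform (arctanh), mean in
    z-space, then back-transform with the hyperbolic tangent."""
    return np.tanh(np.mean(np.arctanh(np.asarray(rs, dtype=float))))

def rank_bipolar_pairs(grid_epochs, fpz_epochs):
    """Fisher-z-averaged Pearson correlation of every cEEGrid bipolar
    derivation Ri-Rj with the Fpz signal over grapho-element epochs,
    sorted by absolute correlation."""
    n = grid_epochs[0].shape[0]
    scores = {}
    for i, j in combinations(range(n), 2):
        rs = [np.corrcoef(g[i] - g[j], f)[0, 1]
              for g, f in zip(grid_epochs, fpz_epochs)]
        scores[f"R{i+1}-R{j+1}"] = fisher_average(rs)
    return sorted(scores.items(), key=lambda kv: -abs(kv[1]))

def cohens_kappa(confusion):
    """Cohen's kappa from a square confusion matrix of two ratings."""
    c = np.asarray(confusion, dtype=float)
    n = c.sum()
    p_obs = np.trace(c) / n                         # observed agreement
    p_exp = (c.sum(axis=0) @ c.sum(axis=1)) / n**2  # chance agreement
    return (p_obs - p_exp) / (1.0 - p_exp)

print(fisher_average([0.62, 0.71, 0.55]))  # ~0.63
print(cohens_kappa([[50, 5], [10, 35]]))   # ~0.69
```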
The correlation analysis approach provides evidence that combinations of ear-EEG channels can be used to extrapolate information measured at traditional locations on the scalp, like Fpz. Overall, the results support the selection of those cEEGrid channel combinations that were used for the cEEGrid and cEEGrid+EOG analyses. Here, cEEGrid channels were combined and renamed to mimic EEG signals that may be recorded from the traditional PSG scalp electrodes Fp2, F4, C4, P4, and O2. The quality of the hypnograms in comparison to the Fpz+EOG hypnograms differed depending on the inclusion of additional EOG channels in the analysis. The hypnograms based on the cEEGrid alone showed significantly less agreement than the cEEGrid+EOG hypnograms. A comparison of the Cohen's κ values across the columns of Table 4 with typical inter-rater agreement values (28) shows that, while the reliability of scores appears low for some sleep stages, it is not lower than what is typical for manually scored hypnograms. We expect that automated scoring may outperform manual scorers and yield better results in the near future (29). The results of the correlation analysis further indicate that the annotation of grapho-elements, like K-complexes and sleep spindles, should be possible in ear-EEG data. In the current study, we found that off-the-shelf smartphone technology, when combined with a wireless amplifier and a potentially easy-to-administer ear-EEG electrode array, may be sufficient to provide hypnograms from home sleep data. Note, however, that ear-EEG performed better in combination with additional EOG electrodes than by itself, in particular when determining REM sleep. REM sleep can be challenging to differentiate from wake phases without EOG, which provides essential information on the vertical and horizontal eye movements that signify REM sleep. Similarly, slow rolling eye movements clearly visible in the EOG channels may indicate the transition from wake to N1 sleep. Due to the importance of the EOG information for sleep scoring, it would be interesting for future studies to test whether scoring using EOG alone is possible. Reference (24) showed only moderate agreement for all sleep stages combined (κ = 0.42 ± 0.21) between hypnograms based on the cEEGrid and hypnograms based on full PSG, and attributed the absence of an EOG as a possible reason for the discrepancy. The current study is based on a manual scoring of the hypnograms. The use of channel combinations approximating the signals of classical PSG channels, combined with suitable channel labels instead of direct cEEGrid channel names, could also be one reason for the better agreement, as suggested in (25). Of the relevant characteristics that expert sleep scorers require, eye movements and grapho-elements are of great importance. They will likely continue to play a prominent role in defining hypnograms as the field moves toward automatic sleep stage detection (31). Concerning eye movements, the cEEGrid proved a useful approximation to relevant scalp positions, but it was not able to provide sufficient EOG information, at least when used on a single side only. To improve on the design, extending the electrode array to include near-eye positions, or possibly emulating EOG by using two cEEGrids (one on each ear) and cross-referencing between the two, may be advantageous (22). A recent adaptation of the cEEGrid to flex-printed forehead EEG delivered high-quality EEG signals from forehead and facial positions with minimal discomfort or inconvenience over the course of 8 h (32).
Encompassing standard EOG positions in the grid, this electrode array may provide a signal suitable for the detection of REM, though it has not been tested for wear during sleep. As noted previously (21), the cEEGrid is only available in one size, which can make fitting difficult, especially in the elderly, who tend to have larger ears. The cEEGrid is not flexible and adjustable enough to be worn comfortably by everyone. Ideally, ear-EEG electrode grids should be manufactured from a more flexible material and be available in different sizes to allow easier self-application. Concerning grapho-elements, we found in a data-driven approach that sleep spindles and K-complexes are best represented in cEEGrid channel combinations that point in the direction of Fpz. In a future study, we plan to directly compare ear-EEG channel combinations with EEG measured at several scalp positions to further validate ear-EEG solutions for sleep EEG acquisition. Recently, suggestions have been made for the sources of sleep spindle and K-complex generators (33,34) that may inform an ideal layout of bipolar channels on the hairless skin to best capture sleep characteristics. In the future, a generator-driven approach may be helpful to place bipolar channels in a way that ideally captures the sources of sleep stage characteristics like sleep spindles and K-complexes. Concerning ear-EEG, the source-sensor relationship has recently been evaluated by simulations comparing cEEGrid ear-EEG with 128-channel cap-EEG (23). Using the same forward modeling approach, a new arrangement of electrodes, e.g., oriented toward a K-complex generator, may be compared to full scalp EEG to provide an estimate of sensitivity to the regions in question. This approach may help in finding the best trade-off between comfortable, unobtrusive sensors and data quality. Further miniaturization and optimization of wireless EEG systems is an additional point to be addressed in the future. Current mobile EEG systems already include movement sensors. Depending on the placement of the amplifier on the body during the night, movement sensors could be used to obtain additional information, such as body position and breathing patterns.

(Figure caption: Colors represent the magnitude of correlations. Based on the correlation analysis, cEEGrid channel combinations that point in the direction of Fpz yield the highest absolute correlations and are therefore best suited to represent grapho-elements (K-complexes and sleep spindles) annotated in the EEG recorded at Fpz; the combination R5-R8 points in the frontal direction, hence its negative correlation value.)

The AASM criteria form a helpful standard that has been refined over decades. However, as sleep monitoring moves from hand-coding by trained personnel to automatic decoding with machine learning methods, inflexible PSG procedures may constrain the development of new standards for identifying sleep stage characteristics. Disruptive technologies may be needed to help identify normal and abnormal events during sleep in ecologically valid settings. Ear-EEG acquisition seems suitable for the development of comfortable, discreet, and robust sleep EEG systems that work at home, can be self-administered, and are unobtrusive during wear.

DATA AVAILABILITY STATEMENT

The datasets presented in this article are not readily available because sharing of raw data was not included in the ethics statement. Requests to access the datasets should be directed to Carlos F.
da Silva Souto, <EMAIL_ADDRESS>.

ETHICS STATEMENT

The studies involving human participants were reviewed and approved by the Research Impact Assessment and Ethics Committee at the Carl von Ossietzky University of Oldenburg. The participants provided their written informed consent to participate in this study.

AUTHOR CONTRIBUTIONS

IM and SD designed the experiment. IM collected the data. IM and MB analyzed all 2018 data. MP scored all sleep data and coded grapho-element annotations. CS analyzed all 2020 data and devised and executed the correlation and grapho-element analyses. WP, CS, and KIW wrote the manuscript. SD, IM, MB, and MP offered feedback during the writing process. All authors contributed to the article and approved the submitted version.

FUNDING

The Oldenburg Branch for Hearing, Speech and Audio Technology HSA is funded in the program »Vorab« by the Lower Saxony Ministry of Science and Culture (MWK) and the Volkswagen Foundation for its further development. Part of this project was funded by BMBF NeuroCommTrainer (16SV7790). Part of the research was conducted as a project in partial fulfillment of the requirements for a master's degree by ID.
6,192.2
2021-06-30T00:00:00.000
[ "Computer Science" ]
Rapid Preparation of a Large Sulfated Metabolite Library for Structure Validation in Human Samples

Metabolomics analysis of biological samples is widely applied in medical and natural sciences. Assigning the correct chemical structure in the metabolite identification process is required to draw the correct biological conclusions and still remains a major challenge in this research field. Several metabolite tandem mass spectrometry (MS/MS) fragmentation spectra libraries have been developed that are either based on computational methods or on authentic libraries. These libraries are limited due to the high number of structurally diverse metabolites, the low commercial availability of these compounds, and the increasing number of newly discovered metabolites. Phase II metabolites of xenobiotics are a compound class that is underrepresented in these databases despite their importance in diet, drug, and microbiome metabolism. The O-sulfated metabolites have been described as a signature for the co-metabolism of bacteria and their human host. Herein, we have developed a straightforward chemical synthesis method for the rapid preparation of sulfated metabolite standards to obtain mass spectrometric fragmentation patterns and retention time information. We report the preparation of 38 O-sulfated alcohols and phenols for the determination of their MS/MS fragmentation patterns and chromatographic properties. Many of these metabolites are regioisomers that cannot be distinguished solely by their fragmentation patterns. We demonstrate that the versatility of this method is comparable to standard chemical synthesis. This comprehensive metabolite library can be applied in co-injection experiments to validate metabolites in different human sample types to explore microbiota-host co-metabolism, xenobiotic, and diet metabolism.

Introduction

Metabolomics is the most recent major "omics" research field and is in an ongoing process of optimizing data quality, sample preparation, and the development of bioinformatic tools [1][2][3]. Metabolites in any biological sample are of high structural complexity and different polarity and have concentration differences of several orders of magnitude, which is in stark contrast to the other three major omics fields (genomics, transcriptomics, and proteomics) with their defined sets of natural building blocks. Extracted metabolite mixtures are commonly analyzed by the following two standard detection methods: (i) nuclear magnetic resonance spectroscopy and (ii) chromatographic separation systems (e.g., gas or liquid phase) coupled to mass spectrometry [4,5]. Mass spectrometry has evolved as the dominant method in metabolomics, and identification of a metabolite requires chromatographic information and the mass-to-charge ratio (m/z) to determine the correct chemical structure. The structure validation process is currently considered to be one of the major bottlenecks in metabolomics analysis. The development of robust authentic metabolite databases for rapid and automated metabolite structure validation is not a trivial task due to the structural complexity and large number of metabolites. Powerful and sophisticated metabolite databases have been developed, such as GNPS, HMDB, MoNA, SIRIUS, NIST, and METLIN [6][7][8][9][10][11][12]. These constantly growing experimental and computational standard libraries contain large sets of metabolite fragmentation patterns and serve as important tools for the characterization of metabolites in any sample type.
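Since identification against such a library hinges on matching both m/z and retention time to an authentic standard, the core comparison is easy to state in code. The function below is an illustrative sketch with arbitrary tolerance choices, not part of any of the cited database tools; the example values are hypothetical.

```python
def matches_standard(mz, rt_min, std_mz, std_rt_min,
                     mz_tol_ppm=10.0, rt_tol_min=0.1):
    """True if an observed feature matches a reference standard on both
    m/z (ppm tolerance) and retention time (minutes); the tolerances are
    placeholder values that would be tuned to the instrument."""
    mz_ok = abs(mz - std_mz) <= std_mz * mz_tol_ppm * 1e-6
    rt_ok = abs(rt_min - std_rt_min) <= rt_tol_min
    return mz_ok and rt_ok

# e.g., an observed feature vs. a synthetic standard (hypothetical values)
print(matches_standard(203.0021, 6.99, 203.0020, 7.00))  # True
```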
However, new tools and advanced analyses lead to the identification of previously unidentified and uncharacterized metabolites that are not present in these databases and not directly available to the scientific community [13,14]. The structures of validated metabolites are reported at different levels of confidence, and the highest level requires authentic chemical standards [15]. These standards are required to distinguish regioisomers, that is, metabolites with the same chemical formula and m/z value, which in most cases cannot be distinguished based on their fragmentation patterns alone [15,16]. For example, hydroxybenzoic acid can have three different substitution patterns, with the phenolic alcohol either in the ortho, meta, or para position relative to the carboxylic acid. These three regioisomers have different chemical and physical properties and are products of different biochemical pathways [17,18]. Therefore, the correct structural information for the identified compound is required to draw the correct biochemical conclusions in metabolomics studies. An authentic standard is required to validate the metabolite structure and determine its specific characteristics, such as the retention time, the m/z value, and the MS/MS fragmentation pattern. These synthetic standards are utilized in co-injection experiments for direct comparison with the natural metabolite. We have recently developed new chemical biology tools for the discovery of unknown metabolites in human samples [19][20][21][22]. A metabolite class of interest is the O-sulfated metabolites, which have been widely linked with the co-metabolism between the human host and its gut microbiome [19,20,23,24]. Sulfated compounds are commonly known to be part of the human phase II clearance process, through which a large percentage of xenobiotic compounds are prepared for excretion through urine or feces [25,26]. Indoxyl sulfate and p-cresyl sulfate are two examples of the interplay of mammalian and bacterial metabolism [19]. Despite the importance of this compound class, the number of these metabolite conjugates reported in the largest metabolite databases, such as HMDB or METLIN, is limited, and they are thus widely excluded from metabolomics studies. For the investigation of this metabolite class, we recently developed a method combining enzymatic metabolite conversion with state-of-the-art metabolomics bioinformatic analysis [19,27]. Several previously unreported sulfated metabolites were discovered in human urine and fecal samples. Metabolite structures in these studies were validated at different confidence levels. Most assignments were performed by comparing their mass spectrometric fragmentation patterns with databases, as commercially available sulfated compounds are scarce. Due to the lack of available reference standards, we sought to prepare a comprehensive library of biologically relevant sulfated metabolites. As the chemical synthesis of these compounds usually requires a fully equipped organic chemistry infrastructure that is not available in many metabolomics laboratories, we describe a simple and straightforward method to quickly prepare a series of sulfated standards. This method can be used to efficiently produce a large number of reference compounds through parallel syntheses at the same quality as standard organic chemistry. We outline the preparation of a chemical library of 38 sulfated metabolites that are now available for validation in human samples.
Results and Discussion

In order to efficiently synthesize standard molecules for the structure validation of sulfated metabolites and for the large-scale preparation of reference molecules, we sought to devise a straightforward synthetic strategy (Figure 1). Standard chemical synthesis protocols for metabolites usually require several chemical reactions, including protecting-group chemistry to avoid byproduct formation in most synthetic routes [28,29]. This is followed by preparative high-performance liquid chromatography (HPLC) purification and NMR characterization to confirm the chemical structure of the synthetic product (Method A) [19,30,31]. The purification of the crude product from a chemical reaction is time-consuming and also requires specific equipment. In our procedure reported herein (Method B), we merely remove all reagents and the solvent after completed synthesis using a lyophilizer. The product is then reconstituted and analyzed via ultra-performance liquid chromatography coupled to mass spectrometry (UPLC-MS). This method can be used for the large-scale and parallel preparation of new sulfated compounds. Herein, we detail a step-by-step evaluation of this method and provide validation of the quality of this synthetic procedure. The prepared standard compound library is available for rapid and efficient metabolite structure characterization in human samples. An additional advantage is the small preparation scale (0.2 mg), as the high sensitivity of the mass spectrometric analysis does not require large-scale synthesis.

Validation of Standard Preparation Procedure

In the first step, we validated our standard preparation strategy by comparing our method with metabolites prepared through standard chemical synthesis. The two regioisomers, 3-methoxyphenol sulfate (1) and 4-methoxyphenol sulfate (2), were first synthesized using the standard method (Method A). The two regioisomers are baseline-separated, and their MS/MS fragmentation results in almost identical fragmentation patterns (Figure 2A,B). On the basis of this similarity, these two isomers are a good example of metabolites that can only be distinguished in biological samples using authentic reference molecules.
Then, each regioisomer 1 and 2 was prepared using Method B by mixing the sulfate reagent NMe3·SO3 (3 eq.), the corresponding alcohol or phenol (1 eq.), sodium hydroxide (1 eq.), and sodium bicarbonate (3 eq.). The solution was stirred for 16 h under inert gas conditions. The solvent of the reaction mixture was removed, and the residue was re-dissolved and analyzed by UPLC-MS. Reference molecules prepared by either Method A or Method B have the same chromatographic properties and mass spectrometric fragmentation spectra (Figure 2A,B). This validates the applicability of Method B for the construction of a sulfated metabolite library. We next validated the regioselectivity of this reaction for Method B. While the method is easily applicable to the sulfation of monohydroxylated compounds, many metabolites contain additional functionalities such as amines or carboxylic acids, which could lead to byproduct formation, including bis-sulfation. In order to validate our method for these metabolites, we tested two different substrates. In the first reactivity analysis, we tested the sulfation reactivity and stability of carboxylic acids, which form labile sulfated compounds due to fast hydrolysis in acidic and neutral aqueous solutions. Phenylbutyric acid was chosen as a model substrate, as it contains only one carboxylic acid as the reactive site. No consumption of the starting material was observed, and no formation of a sulfated product was detected (Figure 2C). On the basis of this observation, we can exclude sulfation of carboxylic acid functionalities. Furthermore, no carboxylic acid sulfation was observed for any substrate using Method B during the construction of the sulfate library. The second reactivity analysis was performed to test the sulfation of primary amines. Serotonin was tested as an example substrate that contains both a primary amine and a hydroxyindole functionality. Upon testing Method B for this compound, we obtained two different monosulfated products identified in the mass spectrometric analysis (Figure 2D). These two products were identified as 3-sulfohydroxyserotonin and N-sulfo-serotonin. No bis-sulfated product was identified in the reaction mixture.
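As an aside before the fragmentation analysis: the Method B recipe quoted above fixes all reagent amounts relative to the substrate, so the weigh-out masses follow directly from the substrate amount. The helper below is hypothetical and uses standard molecular weights, not values taken from the paper.

```python
# Hypothetical helper for the Method B stoichiometry (1 eq substrate,
# 1 eq NaOH, 3 eq NaHCO3, 3 eq SO3·NMe3); MWs are standard values (g/mol).
MW = {"NaOH": 40.00, "NaHCO3": 84.01, "SO3·NMe3": 139.17}
EQUIV = {"NaOH": 1.0, "NaHCO3": 3.0, "SO3·NMe3": 3.0}

def reagent_masses_mg(substrate_mg, substrate_mw):
    """Reagent masses (mg) for a given substrate amount and MW."""
    mmol = substrate_mg / substrate_mw  # substrate amount in mmol
    return {name: round(mmol * EQUIV[name] * MW[name], 3) for name in MW}

# e.g., the 0.2 mg scale of Method B with 4-methoxyphenol (MW 124.14)
print(reagent_masses_mg(0.2, 124.14))
# {'NaOH': 0.064, 'NaHCO3': 0.406, 'SO3·NMe3': 0.673}
```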
Mass spectrometric fragmentation experiments were performed to distinguish the two structures based on their mass spectrometric fingerprints (Figure 2D). Each metabolite was identified through specific MS fragments: for O-sulfated serotonin, the loss of SO3 (m/z = 79.9582), and for N-sulfated serotonin, the specific loss of NSO3 (m/z = 95.9749) [32]. Structurally similar metabolites that also contain primary amines can easily be distinguished by analysis of these specific mass spectrometric fragments to identify the correct sulfation site.

Construction of the Sulfate Library

Our developed and validated procedure for the preparation of sulfated metabolites was then applied to a large set of biologically relevant phenols. We synthesized a series of sulfated compounds from structurally diverse hydroxylated and phenolic compounds to extend our existing in-house compound library of 24 sulfated metabolites, which were chemically synthesized in previous studies [19,33]. The new substrates were selected either from our previous studies, for which we had proposed metabolite structures based on comparison of MS fragmentation spectra with databases, or from a fecal metabolite library purchased from MetaSci. Using Method B, we prepared 38 new O-sulfated metabolites, some of which also contained other functionalities such as carboxylic acids as well as primary and secondary amines (Table 1). We focused on metabolites with a single reactive alcohol or phenol. Furthermore, each compound was fragmented at two different voltages (10 eV and 30 eV) to obtain comprehensive mass spectrometric fragmentation spectra, as commonly reported in metabolite databases. This structural information for all prepared sulfated compounds is now available to the scientific community. In addition, we are uploading this information to MS/MS fragmentation spectra databases. We have included the top five fragmentation peaks with the highest intensity from the obtained MS/MS spectra in Table 1, which builds the basis for straightforward compound validation in future studies. To the best of our knowledge, this is the largest chemically synthesized collection of sulfated metabolites reported to date.
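Given the diagnostic loss of SO3 noted above, a simple neutral-loss screen can flag candidate sulfate conjugates in MS/MS data. The sketch below assumes singly charged ions and uses the standard monoisotopic mass of neutral SO3 (about 79.9568 Da, computed from atomic masses); the ppm tolerance and example values are arbitrary illustrative choices.

```python
SO3_MASS = 79.9568  # Da, monoisotopic S + 3*O (standard atomic masses)

def has_so3_loss(precursor_mz, fragment_mzs, tol_ppm=20.0):
    """True if any fragment lies one SO3 below the precursor, assuming
    singly charged ions; the ppm tolerance is a placeholder setting."""
    tol = precursor_mz * tol_ppm * 1e-6
    return any(abs((precursor_mz - f) - SO3_MASS) <= tol
               for f in fragment_mzs)

# e.g., deprotonated 4-methoxyphenol sulfate [M-H]- near m/z 203.0020
# losing SO3 to give the phenolate fragment near m/z 123.0452
print(has_so3_loss(203.0020, [123.0452, 59.0139]))  # True
```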
An example of co-injection experiments with a metabolite synthesized using Method B and the natural compound in urine samples is illustrated for sulfated syringic acid (36) in Figure 3. We selected 36 as a representative and realistic example, as this urine sample also contains an additional regioisomer with the same m/z value but a different retention time (Figure 3A). The extracted ion chromatogram (EIC) traces in this experiment demonstrate that the synthesized compound and the natural metabolite eluting at 6.99 min are identical, sharing the same retention time and fragmentation pattern. This realistic example demonstrates efficient structure validation utilizing our method to distinguish two regioisomers. The chemical structure of the regioisomer at 7.58 min was not determined. The mass spectrometric fragmentation spectra of 36 at both applied voltages fit the predicted fragments (Figure 3B,C).

Table 1. Overview of the standard library including 38 metabolites: chemical formula, m/z ratio (in negative ionization mode), detected retention time, and the top 5 fragments (greater than 4%) for two different voltages (10 eV and 30 eV). All numbered chemical structures are depicted in Supplementary Materials, Figure S1.

Separation of Regioisomers

Another advantage of our method is the ability to efficiently validate and distinguish structural regioisomers. As described above, metabolites with the same chemical formula can generally not be distinguished based on their MS/MS fragmentation patterns alone. Chromatographic properties and retention times are difficult to predict for LC-MS-based analysis of similar compounds, as a retention index (RI) can currently only be used for gas chromatography coupled to mass spectrometry (GC-MS) analysis [34]. Sulfated phenolic aromatic compounds with the same chemical formula commonly have different substitution patterns, such as ferulic acid sulfate and isoferulic acid sulfate, which can be present in all human sample types. Despite minor structural differences, these compounds have different bioactive properties, as only ferulic acid sulfate has been described to reduce blood pressure in mice [35]. Depicted in Figure 4 are four examples of regioisomers that can only be distinguished through their different chromatographic properties. Interestingly, these similar structures can result either in baseline-separated or in closely eluting and overlapping peaks. The regioisomers 2-hydroxypyridine sulfate (4) and 3-hydroxypyridine sulfate (5) have a retention time difference of over 2.5 min and can clearly be separated (Figure 4A). This is uncharacteristic compared with other structural isomers.
For example, 3-hydroxyhippuric acid sulfate (34) and 4-hydroxyhippuric acid sulfate (35) have similar retention times and co-elute (Figure 4B). Single UPLC-MS analyses of each prepared standard demonstrate the separation of the two peaks. Co-injection experiments at equimolar concentrations can be used to distinguish these metabolites in complex biological matrices. The three metabolites 2-hydroxybenzoic acid sulfate (13), 3-hydroxybenzoic acid sulfate (14), and 4-hydroxybenzoic acid sulfate (15) also have similar retention times, but each structure can clearly be assigned using synthetic reference standards (Figure 4C). A similar example demonstrating the versatility of our method is the facile identification of 2-methoxyphenol sulfate (8), 3-methoxyphenol sulfate (1), and 4-methoxyphenol sulfate (2) using authentic standards (Figure 4D).

Metabolite Library and Significance

Upon the validation of our standard preparation procedure, we have compiled a comprehensive in-house library of 62 authentic sulfated compounds, including the 38 additional sulfated metabolites prepared in this study (Supplementary Materials, Table S1). The library contains biochemically relevant compounds, including metabolites from human-microbiota co-metabolism and dietary compounds produced by the microbiome. Several of these new metabolites were prepared from a fecal metabolite library of 540 metabolites purchased from MetaSci that included phenolic and bioactive compounds. The colon is the part of the human body with the largest population of bacteria, and therefore metabolite analysis of fecal samples provides the best possibility to study gut microbiota metabolism. Analysis of metabolites in fecal samples will provide more detailed insights into the metabolic interaction of the human host and its gut bacteria. For example, 3-hydroxyhippuric acid and 4-hydroxyhippuric acid have been widely associated with this co-metabolism, as bacteria produce the precursor hippuric acid from food compounds [36]. We have recently identified the presence of their sulfated analogues for the first time, and they are now available for inclusion in future metabolomics and microbiome analyses [27]. Other metabolites described as part of the benzoic acid biotransformation are the three analogues of hydroxybenzoic acid sulfate (13, 14, and 15) as well as benzyl alcohol sulfate (6) [18]. Furthermore, hippuric acid has been described as a marker for uremic diseases, as it is one of the end products of the detoxification of toluene in the human body [37]. Identification and analysis of additional phase I or phase II analogues can provide further insights into the potential of hippuric acid and its analogues as urinary biomarkers for diseases and may lead to the identification of unknown bioactive metabolites. Hydroxypyridines are another compound class that has been associated with bacterial metabolism. Kaiser et al. described specific bacterial reactions for the biosynthesis of 2-hydroxypyridine, 3-hydroxypyridine, and 4-hydroxypyridine [38]. We have previously identified 3-hydroxypyridine sulfate (5) in human samples. Raspberry ketone is a compound produced by yeast, and its sulfated analogue (28) has not yet been identified in human samples [39].
Having the synthetic standard simplifies the validation of this sulfated metabolite in human samples. Raspberry ketone can also be produced through the phenylalanine degradation pathway [40]. We have also synthesized several compounds that are sulfated analogues of products of the bacterial degradation of anthocyanins (vanillin, sinapic acid, or syringic acid) [41,42]. Additionally, N-methyltyramine is part of the catecholamine metabolism in the human brain, and its sulfated analogue (18) has only been reported in human samples once before [43]. Tyramine is a metabolite of great interest, as it is produced by bacteria through the decarboxylation of tyrosine, and one of its analogues has been identified as a biomarker for the parasitic disease onchocerciasis [30]. 3-Hydroxyphenylacetic acid is a product of the bacterial conversion of m-tyramine [44]. Mandelic acid is part of the neurotransmitter metabolism and is produced by the degradation of adrenaline and noradrenaline [45]. Its sulfated analogue (20) can now be identified in human samples. The two synthesized coumarins, umbelliferone and 4-hydroxycoumarin, are part of the p-coumaric acid metabolic pathway [46]. Hypoxanthine is a metabolite that is part of the purine degradation pathway in humans, whereas 4-cyanophenol sulfate (7) has been widely described as a pesticide-derived metabolite formed during detoxification metabolism in plants [47]. The comprehensive analysis of these sulfated metabolites of biological importance in both human and bacterial metabolism requires sulfated standard metabolites to determine the correct metabolic pathway and origin. We have also included compounds that have not yet been detected in human samples but are potential metabolic downstream products. The availability of their standards provides the opportunity (i) to determine their absolute chemical structure and (ii) to precisely quantify these metabolites in human samples.

Materials and Equipment

All reagents and solvents were purchased from Sigma-Aldrich or Fisher Scientific and were used without further purification. Most phenolic or hydroxylated compounds used for Method B were part of a metabolite library purchased from MetaSci (Toronto, ON, Canada). HPLC-grade solvents were used for HPLC purification and LC-MS-grade solvents for UPLC-MS analysis. Solutions were concentrated in vacuo either on a Speedvac Concentrator Plus system (Eppendorf, Hamburg, Germany) or using a FreeZone Benchtop 70040 lyophilizer, 4.5 L (Labconco, Kansas City, MO, USA). Chromatographic purification of products was accomplished using preparative reverse-phase HPLC on an Agilent HPLC-1100 series system (Agilent, Santa Clara, CA, USA) equipped with a Waters Atlantis T3 preparative column (5 µm, 10 × 100 mm) at a 2.5 mL/min flow rate. All compounds synthesized using Method A were ≥95% pure as determined by NMR. NMR spectra were recorded on an Agilent 400 MHz spectrometer (1H NMR: 399.97 MHz; 13C NMR: 100.58 MHz). Chemical shifts are reported in parts per million (ppm) on the δ scale relative to an internal standard. Multiplicities are abbreviated as follows: s = singlet, d = doublet, t = triplet, q = quartet, and m = multiplet. High-resolution mass spectra were acquired on a Waters SYNAPT G2-S High Definition Mass Spectrometry (HDMS) system (Waters, Milford, MA, USA) using an electrospray ionization (ESI) source with a Waters ACQUITY UPLC I-Class system (Waters, Milford, MA, USA) equipped with a Waters Acquity UPLC HSS T3 column (1.8 µm, 100 × 2.1 mm; Waters, Milford, MA, USA).
Human Samples

Healthy donor urine samples were obtained in accordance with the World Medical Association Declaration of Helsinki, and all healthy donors gave written informed consent (ethical approval number: Dnr 2017/290-31). All samples were stored at −80 °C.

Chemical Synthesis: Method A

The general procedure is as follows: to a solution of the corresponding hydroxyl compound and 3.0 eq NaOH, 4.0 eq NaHCO3 and 2.5 eq of the SO3·NMe3 complex were added, as illustrated in Scheme S1. The reaction mixture was stirred at room temperature for 24 h and concentrated using a lyophilizer. The dry crude mixture was purified by preparative HPLC to yield the desired product (details are described in Supplementary Materials, Scheme S1).

Preparation of Sulfates: Method B

Sulfates were prepared by mixing at least 0.2 mg of a hydroxyl compound with 1.0 eq of NaOH, 3 eq of NaHCO3, and 3 eq of SO3·NMe3. The reaction was stirred for 16 h in an inert gas atmosphere. The solvent of the reaction mixture was removed using a lyophilizer, and the remaining solid was re-dissolved in a 5% acetonitrile solution in water and analyzed using UPLC-MS. This method was used for all compounds included in Table 1.

UPLC-MS/MS Analysis

Mass spectrometric analysis was performed on an Acquity UPLC system connected to a Synapt G2 Q-TOF mass spectrometer, both from Waters Corporation (Milford, MA, USA). Data acquisition and analysis were performed using the MassLynx software package. The samples were introduced into the Q-TOF using negative electrospray ionization. The capillary voltage was set to −2.50 kV, and the cone voltage was 40 V. The source temperature was 100 °C, the cone gas flow was 50 L/min, and the desolvation gas flow was 600 L/h. The instrument was operated in MSE mode, the scan range was m/z = 50-1200, and the scan time was 0.3 s. In low-energy mode, the collision energy was 10 eV, and in high-energy mode the collision energy was ramped between 25 and 45 eV. A solution of sodium formate (0.5 mM in 2-propanol/water, 90:10, v/v) was used to calibrate the instrument, and a solution of leucine-enkephalin (2 ng/µL in acetonitrile/0.1% formic acid in water, 50:50, v/v) was used for the lock mass correction, injected at 30 s intervals. For the MS/MS fragmentation analysis, the 10 eV setting was a combination of 5 eV on the trap and 5 eV on the transfer in the collision cell, and the 30 eV setting was a combination of 10 eV on the trap and 20 eV on the transfer in the collision cell.

Conclusions

In this study, we present the chemical synthesis of a large library of sulfated metabolites relevant for mass spectrometric structure validation in human samples. By developing a straightforward and high-throughput synthetic procedure, we have synthesized and characterized 38 new sulfated compounds of diverse metabolic scaffolds. This metabolite library can easily be extended with any commercially available or synthetic monohydroxylated or monophenolic compound using Method B. As this compound class has been identified as a signature for microbiota-host co-metabolism, this unique metabolite library can serve to elucidate unknown metabolic interactions. Uncovering new metabolic interactions and gaining further insights into this interspecies metabolism has high potential for identifying unknown bioactive compounds and for understanding human physiology and disease development linked to microbiota dysbiosis. This sulfated metabolite library is now available to the scientific community for inclusion in standard metabolomics studies.
6,372.8
2020-10-01T00:00:00.000
[ "Medicine", "Chemistry" ]
Zeros and roots of unity in character tables

For any finite group $G$, Thompson proved that, for each $\chi\in {\rm Irr}(G)$, $\chi(g)$ is a root of unity or zero for more than a third of the elements $g\in G$, and Gallagher proved that, for each larger-than-average class $g^G$, $\chi(g)$ is a root of unity or zero for more than a third of the irreducible characters $\chi\in {\rm Irr}(G)$. We show that in many cases "more than a third" can be replaced by "more than half". The author suspects that the answers to these questions are both $1/2$. In particular, we propose the following:

Conjecture 1. $\theta(G)$ and $\theta'(G)$ are $\geq 1/2$ for every finite group $G$.

We establish the conjecture for all finite nilpotent groups by establishing a much stronger result about zeros for this family of groups, which includes all $p$-groups. The number of $p$-groups of order $p^n$ was shown by G. Higman [6] and C. C. Sims [12] to equal $p^{\frac{2}{27}n^3+O(n^{8/3})}$ as $n \to \infty$, and it is a folklore conjecture that almost all finite groups are nilpotent, in the sense that the ratio of the number of nilpotent groups of order at most $n$ to the number of groups of order at most $n$ tends to $1$, which, in view of our result, would mean that Conjecture 1 holds for almost all finite groups. Conjecture 1 is readily verified for rational groups, such as Weyl groups, and for all groups of order $< 2^9$, and although $\theta(G) = 1/2$ for certain dihedral groups, the second inequality is strict in all known cases. The author suspects that both inequalities are strict for all finite simple groups:

Conjecture 2. $\theta(G)$ and $\theta'(G)$ are $> 1/2$ for every finite simple group $G$.

Nilpotent groups

We begin with our results on finite nilpotent groups.

Theorem 2. Let $G$ be a finite nilpotent group, and let $g \in G$. Then … for at least half of the nonlinear $\chi \in {\rm Irr}(G)$.

The key ingredient in the proof of Theorems 1 and 2 is Theorem 8, which will replace the result of Siegel used by Thompson and Gallagher. Its proof relies on some auxiliary results of independent interest and is based on arithmetic in cyclotomic fields. For each positive integer $k$, we denote by $\zeta_k$ a primitive $k$-th root of unity. For any algebraic integer $\alpha$ contained in some cyclotomic field, we denote by $l(\alpha)$ the least integer $l$ such that $\alpha$ is a sum of $l$ roots of unity, by $f(\alpha)$ the least positive integer $k$ such that $\alpha \in \mathbb{Q}(\zeta_k)$, and by $m(\alpha)$ the normalized trace of $\alpha\bar{\alpha}$.

Proof of Lemma 4. If $n = 0$, then there is nothing to prove, so assume $n \geq 1$. Let $\zeta$ be a primitive $p^n$-th root of unity. For each $\alpha_j$ and $\beta_k$, let $r_j$ and $s_k$ be nonnegative integers such that $\alpha_j = \zeta^{r_j}$ and $\beta_k = \zeta^{s_k}$. Put $P(x) = \sum_j x^{r_j} - \sum_k x^{s_k}$. Then $P(\zeta) = 0$, so $P(x)$ is divisible in $\mathbb{Z}[x]$ by the cyclotomic polynomial $\Phi_{p^n}(x) = \Phi_p(x^{p^{n-1}})$.

Proposition 5. Let $G$ be a finite group, let $\chi \in {\rm Irr}(G)$, and let $g$ be an element of $G$ with order a power of a prime $p$. If $p = 2$ or $\chi(1) \equiv \pm 2 \pmod{p}$, then either $\chi(g) = 0$, $\chi(g)$ is a root of unity, or $m(\chi(g)) \geq 2$.

Proof of Proposition 5. Suppose that $p = 2$ or $\chi(1) \equiv \pm 2 \pmod{p}$. Let $p^n$ be the order of $g$, and let $\zeta$ be a primitive $p^n$-th root of unity, so that (1) holds. We will show that either $\alpha = 0$, $\alpha$ is a root of unity, or $m(\alpha) \geq 2$. Put $P = f(\alpha)$; then $P$ divides $p^n$. If $P = 1$, then $\alpha$ is rational and the conclusion follows. If $P$ is divisible by $p^2$, then for $\gamma$ a primitive $P$-th root of unity, $\alpha$ is uniquely of the shape $\alpha = \sum_k \alpha_k \gamma^k$, where the $\alpha_k$ are algebraic integers, and a straightforward calculation [2, p. 115] shows that $m(\alpha)$ is at least the number of nonzero $\alpha_k$. By (1), at least two of the $\alpha_k$ are nonzero. Hence $m(\alpha) \geq 2$ if $p^2 \mid P$. It remains to consider the case $P = p$ [9, Lemma 3]. So assume $l(\alpha) = 2$.
Then by [9, Thm. 1(i)], $\alpha$ can be written in the shape …. By Lemma 4, …. By (3) and the fact that $\chi(1) \equiv \pm 2 \pmod{p}$, …. Hence, for some root of unity $\rho$ and primitive $p$-th root of unity $\xi$, $\alpha = (\xi - 1)\rho$. Hence $m(\alpha) \geq 2$.

Lemma 6. Let $G$ be a finite group, let $\chi \in {\rm Irr}(G)$, and let $g$ be an element of $G$ with order a power of a prime $p$. If $\chi(1) \not\equiv \pm 1 \pmod{p}$, then $\chi(g)$ is not a root of unity.

Proof of Lemma 6. Let $p^n$ be the order of $g$, so $\chi(g) \in \mathbb{Q}(\zeta_{p^n})$, and suppose that $\chi(g)$ is a root of unity. Since the roots of unity in a given cyclotomic field $\mathbb{Q}(\zeta_k)$ are the $l$-th roots of unity for $l$ the least common multiple of $2$ and $k$, we then have $\chi(g) = \epsilon\xi$ for some $\epsilon \in \{1, -1\}$ and $p^n$-th root of unity $\xi$. So by Lemma 4, either $\chi(1) \equiv 1 \pmod{p}$ or $\chi(1) \equiv -1 \pmod{p}$, contrary to hypothesis.

Lemma 7. Let $G$ be a finite group of prime-power order, let $g \in G$, and let $\chi \in {\rm Irr}(G)$ be nonlinear. Then $\chi(g) = 0$ or $m(\chi(g)) \geq 2$.

Proof of Lemma 7. If $|G| = p^n$ with $p$ prime, then each $g \in G$ has order a power of $p$, and each $\chi \in {\rm Irr}(G)$ has degree a power of $p$. So if $\chi(1) > 1$, then by Proposition 5 and Lemma 6, for each $g \in G$, $\chi(g) = 0$ or $m(\chi(g)) \geq 2$.

For any character $\chi$ of a finite group, let ….

Theorem 8. Let $G$ be a finite nilpotent group, let $\chi \in {\rm Irr}(G)$, and let $g \in G$. Then ….

Proof of Theorem 8. Since $G$ is nilpotent, it is the direct product of its nontrivial Sylow subgroups $P_1, P_2, \ldots, P_n$. Let $g_1, g_2, \ldots, g_n$ be the unique sequence with $g_k \in P_k$ and $g = g_1 g_2 \cdots g_n$. …

Proof of Theorem 1. By Proposition 9. …

Proof of Theorem 2. Taking the relation …, applying the elements $\sigma$ of the Galois group $\mathcal{G} = {\rm Gal}(\mathbb{Q}(\zeta_{|G|})/\mathbb{Q})$, and averaging over $\mathcal{G}$, we have …. So for $L = \{\chi \in {\rm Irr}(G) : \ldots\}$, …. By Theorem 8, for each $\chi \in N$, …. From (16) and (17), ….

Proof of Corollary 3. By Theorem 1 and Theorem 2.

Simple groups

We now establish Conjecture 2 for several families of simple groups. Maintaining the notation of Suzuki [13], there are elements $\sigma$, $\rho$, $\xi_0$, $\xi_1$, $\xi_2$ such that each element of $G$ can be conjugated into exactly one of the sets …, where $A_i = \langle \xi_i \rangle$ ($i = 1, 2, 3$), and the irreducible characters of $G$ are given by the following table [13, Theorem 13]: …, where $\epsilon_1^j$ and $\epsilon_2^k$ are certain characters on $A_1$ and $A_2$. The $A_i$'s satisfy …, and denoting by $G_i$ ($i = 0, 1, 2$) the set of elements $g \in G$ that can be conjugated into $A_i - \{1\}$, we have …, where $l_0 = 2$ and … with $s \in \mathbb{Z}$. Then … and $\gamma_s = 0 \Leftrightarrow 4s \pm (q - 1) \equiv 0 \pmod{2(q-1)}$. By (29) and (26)-(27), …. Equality must hold in (31) because …. By (30) and the fact that, for any …, equality must hold in (32) because $1^G$, $\sigma^G$, and $\rho^G$ have size $< |G|/|{\rm Cl}(G)|$, and for any $g$ ….

Verification of III. Let $q = p^n$ with $p$ prime, $G = L_2(q)$, let $R$ and $S$ be as in [7, pp. 402-403], and let $G_0$ (resp. $G_1$) be the set of nonidentity elements $g \in G$ that can be conjugated into $R$ (resp. $S$). The irreducible characters of $G$ are given by Ward [14] in a 16-by-16 table, with the last 6 rows being occupied by 6 families of exceptional characters, the sizes of which are, from top to bottom, $\frac{q-3}{4}$, $\frac{q-3}{4}$, $\frac{q-3}{24}$, $\frac{q-3}{8}$, $\frac{q-3m}{6}$, $\frac{q+3m}{6}$. From Ward's table, we find that for any class $g^G \in \{1^G, X^G, J^G\}$, $\chi(g) \in \{0, 1, -1\}$ for more than half of the irreducible characters $\chi$ of $G$.
Since the classes $1^G$, $X^G$, $J^G$ all have size $< |G|/|{\rm Cl}(G)|$, we conclude that $\theta'(G) > 1/2$. The first step in verifying $\theta(G) > 1/2$ is to compute the following table: …. Then with Table 1 and Ward's table in hand, a straightforward check establishes that, for each $\chi \in {\rm Irr}(G)$, ….

Verification of V and VI. Here, in Tables 2 and 3, we report the values of $\theta$ and $\theta'$ for the sporadic groups and the simple groups of order $\leq 10^9$. All values are rounded to the number of digits shown.

Table 2. The sporadic groups.
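As a concrete illustration of Conjecture 1, the condition can be checked mechanically on a small group. The sketch below uses the quaternion group Q8 and assumes, purely for illustration, that $\theta(G)$ is the smallest fraction, over $\chi \in {\rm Irr}(G)$, of elements $g \in G$ for which $\chi(g)$ is zero or a root of unity; the paper's precise definitions of $\theta$ and $\theta'$ precede this excerpt and may differ in detail.

```python
from fractions import Fraction

# Character table of the quaternion group Q8.
# Columns: conjugacy classes of 1, -1, i, j, k (sizes 1, 1, 2, 2, 2).
class_sizes = [1, 1, 2, 2, 2]
char_table = [
    [1,  1,  1,  1,  1],   # trivial character
    [1,  1,  1, -1, -1],   # three linear characters with
    [1,  1, -1,  1, -1],   # kernels <i>, <j>, <k>
    [1,  1, -1, -1,  1],
    [2, -2,  0,  0,  0],   # the unique 2-dimensional character
]

def is_zero_or_root_of_unity(v):
    # All entries here are rational integers, so the only roots of
    # unity that can occur are +1 and -1.
    return v == 0 or abs(v) == 1

order = sum(class_sizes)
theta = min(
    Fraction(sum(s for s, v in zip(class_sizes, row)
                 if is_zero_or_root_of_unity(v)), order)
    for row in char_table
)
print(theta)  # 3/4, consistent with the bound >= 1/2 of Conjecture 1
```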
2,210.6
2020-03-30T00:00:00.000
[ "Mathematics" ]
System Analysis Of Inventory Information On Raw Material Companies In the era of increasingly rapid development of information technology, the use of computers in every Indonesian company is very important to support information needs. This is definitely mandatory, because with a computerized system all processes ranging from data processing to other important documents can be arranged neatly so that it can facilitate data storage and search. Problems faced in the inventory information system at the company are usually lacking an accurate, fast, and precise information system so that it is less optimal and produces all accurate and relatively long reports. The purpose of the research conducted by the author is to analyze the information system that runs on the raw material company. The methodology used is the system development life cycle approach starting from analyzing the system that runs through UML (Unified Modeling Language) to describe the running system analysis and analysis of the proposed system so as to improve the quality of the standard company. Introduction Developments in the business world can be characterized by the increasing number of companies engaged in industry, trade and services.Basically an established company has a purpose.The main purpose of a company is how companies can maintain their survival and maintain business continuity in order to survive in competition [1].In a company, large and small agencies, there is always a supply, especially the inventory of goods to produce, with a good inventory information system that has a profound effect on the development and progress of a company or agency that is mainly engaged in production [2].Poor inventory information system will affect other aspects, such as lack of consumer or customer trust in the company. Inventory of goods in raw material companies has a very important role because the operation of the company depends on the availability of raw materials.Similarly, what happened at PT.The Bitung Indonesian Ship Industry that produces ships and company needs.No matter how good the system and procedures for the supply of raw materials carried out in a company without the existence of a controlling role is possible, irregularities will occur that will harm the company [3].Thus the role of internal control in the company is a concern for interested parties. The existence of good and regular internal control in managing the inventory of raw materials, the company leaders will obtain reports that are useful to improve the effectiveness of the company, also help in making policy decisions and accountability in leading the company [4].Internal control of raw material inventories is expected to create control activities for companies that are effective in determining the optimal amount of inventory owned by the company, preventing various violations and frauds that can harm the company, violations of policies applied to inventory, and provide physical security against inventory from theft and damage [5].In line with the times and the development of the need for information, system design is needed so that the information produced ATM e-ISSN: 2622-6804 p-ISSN: 2622-6812 v110 A System Analysis Of Inventory Information On Raw Material Companies (Wahyu Hidayat) meets the needs of the company and can also save time.The development carried out is by designing a stock control material information system, which is expected to provide information easily, quickly and accurately in accordance with the wishes of the user. 
Research Method In a study, of course, using the method of research to achieve the objectives also get reliable information needed by a researcher to carry out several stages in research.In this study only four (4) stages of the process to be used, i.e., identifying needs, planning, design review and prototype prototype. Identify needs, by taking a top and down approach to get a design or description of the teaching materials that will be presented on the website later.Furthermore, planning is the stage of analyzing the data that has been obtained from the results of identification needs.Then, Prototype Design is done to adjust between the needs of users and the system that was planned before being implemented in real terms.The last isa Prototype Review, carried out to improve the system if there is a discrepancy with user needs.The following is a list of literature used in this study. This paper uses the literature review method, to look for the theoretical basis of previous studies [6] which can be used for problem management.This method is used to gather information and data from multiple sources (literature), books, and journals for literature relevant to the needs of the writing of this paper [7].There are 6 (six) literature reviews that are used, there are: 1 . Thesis research at FACULTY OF INDUSTRIAL TECHNOLOGY OF AKPRIND YOGYAKARTA SCIENCE & TECHNOLOGY INSTITUTE conducted by Askin Setia Rinaldy (2009) with the title "COMPUTER PERIPHERAL DATA PROCESSING CLIENT / SERVER SYSTEM AT TOKO MATRIX COMP YOGYAKARTA".The purpose of the study was to design merchandise inventory applications.Applications are designed using visual basic.The inventory system stores a lot of data.Data in this system is managed using a database on a client / server basis, which is designed using entity diagram concepts and normalization.Communication between units (client / server), starts when the application is running or executed.When the client unit is active, and make transactions.Then the client unit will send a command to the server unit, the server will return the result that the client unit requested.The data that is processed on the server unit is the entire system database, while the one sent to the client is only a single result or data, not the entire database.[ In this study, it was explained that the use of dashboards that provide information on employee absenteeism can be used to make employees to work together to achieve the success of the company's business.The data displayed in the dashboard is in the form of interactive graph so that the graph can be performed from attendance from its employees.[13] From the 6 (six) literature studies above, the author gained knowledge about the scope of research, starting from the method to the writing of research reports.But through this research, the author wants to explain the variety of studies that can be seen from several perspectives, which can be used as references or used in the management of other researchers' problems. 
Problem Analysis The system and stock control material process that is currently running in a company has been running well but still requires a long time, the warehouse clerk must control the amount of material in the warehouse at any time.While the presentation of stock control material report data must always be updated with its physical inventory.Besides that, the recording of stock control material data still uses Microsoft Excel which causes frequent recording errors and takes a long time because it has to open a lot of other data. From the problems that have been explained, it can be concluded clearly the lack and advantages reflected in the mind map. Figure 1. Mind Mapping Analysis Explained in the picture above which is an analysis of the inventory system that exists in a company.With this, the company is expected to meet the company's needs and also save time and is easier, faster and more accurate. The analysis of the deficiency of the system Based on the analysis conducted by the researcher, a data presented will be wrong if the warehouse admin or warehouse leader does not perform a procedure control so that there is less cooperation.This is because the production person who will take the goods sometimes does not make a letter of goods and is not known by the warehouse clerk with this Lack of control for warehouse material. 1. below there are shortcomings that occur in the company: 2. The data input process that occurs in material acceptance reports still takes a long time because it still uses Microsoft Excel.3. The length of the process of making stock control material reports so that the data needed by the leadership becomes slow. Needs Analysis System Based on the analysis of the problems on the system that runs requires a longer time in inputting data and reports produced because the system is still using Microsoft Excel so that the time requires more time to produce a report needed by the leadership. So by designing a computerized system in the hope that it can help officers in inputting and will reduce errors or obstacles that occur, then the system needs to be: a. Can display reports and print in Microsoft Excel for data on receipt and expenditure of material from input results so that officers do not need to make reconciliation in making reports.b.Computerized system that can control easily and clearly, thus reducing errors that occur.c.Can provide accurate information so that information can be useful by officers and leaders.d.Can add new data or change data, so that the system can be fixed immediately if there are errors in inputting or not inputted. Solutions With the construction of a system that is needed by users by using visual-based applications because visual-based applications are familiar among community agencies.with a web-based system, web-based applications allow users to use data together at the same time.so that the system can be run on any operating system, and does not require high computer specifications to be able to use web-based applications. Conclusion B Based on the results of research conducted on the inventory information system in raw material companies, the research can draw conclusions as follows: 1.The system that runs at this time has not been able to facilitate employees in obtaining information, this is due to the length of the search process and the making of the report, because of the large number of documents needed, so the decision-making process becomes hampered.2. 
To design a computerized stock control material information system, which can facilitate stakeholders in producing reports needed for the decision making process by the warehouse head so that they can solve existing problems 3.The stock control material information system that is running still uses Ms. Excel starts from material acceptance from suppliers, taking material to production to produce material stock control reports.this causes delays in the processing of data, causing the information generated to be inaccurate. ATM Vol.3, No. 2 July 2019 : 108-113 8] 2. Research Scientific Journal on the University of Indonesia campus regarding Series technology.MAKARA Technology Series is a scientific journal that provides original articles about research knowledge and information or applications of contemporary research and development related to technology issues.This journal is a publication tool and a forum for sharing research products and developments in the field of technology.Every article for this journal must be sent to the editorial office.Complete information about how to send articles and writing guides is provided in each publication.Each article will undergo a selection process by related experts and or editors.Since 2010 this journal has been published semiannually (June and December).Article publication will not be charged.MAKARA Technology Series is the expansion of MAKARA Series B: Regional Science and Technology as the development of the University of Indonesia MAKARA Research Journal published since January.[9] 3. Thesis research at STMIK Raharja conducted by Widi Nugroho (2007), with the title an error in inputting data because high accuracy is needed so as to minimize errors.In the study, the authors proposed an information system using Visual Basic 6 and Microsoft SQL server 2000 applications.[10]4. The thesis research at STMIK AMIKOM YOGYAKARTA was conducted by Anita Manik (2010) with the title "ANALYSIS OF THE DESIGN AND IMPLEMENTATION OF INFORMATION SYSTEMS OF STORES FOR MORNING COFFEE YOGYAKARTA SALES DIVISION."The purpose of this study is to design an application for selling merchandise.Applications are designed using visual basic.The sales system saves a lot of data.Data in this system is managed using Ms.'s database.Access 2003, which was designed using the entity diagram concept and normalization.[11] 5. Thesis research at Gunadarma University conducted by Leonardi Winarsih (2009) with the title "Merchandise Inventory Application Using Visual Basic 6.0".The purpose of this research is to design merchandise inventory applications.Applications are designed using Visual Basic.The inventory system stores a lot of data.Data in this system is managed using a database, which is designed using the concept of entity diagrams and normalization.Tables generated by the database are suppliers, types of goods, buy, sell, and customer.Using the inventory section application can manage inventory printing reports directly.[12]6. Research conducted by Untung Rahardja, By Sholeh, and Fitria Nur setianingsih with the title "USE OF DASHBOARD TO CONTROL PERFORMANCE OF EMPLOYEE ATTENTION TO IMPROVE EMPLOYEE PROFESSIONALISM IN PT .SINARMAS LAND PROPERTY". "Warehouse Spare Parts Inventory Information System Design at PT. KMK Global Sports ".This study discusses the inventory process that is still running using a semicomputer system, but in the storage of data is still not structured because of that there v111 e-ISSN: 2622-6804 p-ISSN: 2622-6812 ATM Vol.3, No. 
2,973.6
2019-07-26T00:00:00.000
[ "Business", "Computer Science" ]
Re-identification risk prediction paradigm using incomplete statistical information and recursive hypergeometric distribution Abstract Today we are living in an era of data explosion 1 . We have easier access to information services than at any time in history, but we also face unprecedented privacy risks because your service providers are extremely likely to know you better than you do 2,3,4 . Although service providers often allege that they have to collect as much personal data as possible to improve user experience, they fail to properly protect user privacy 5,6 . In this regard, the governments of many countries promulgated privacy protection laws, such as the General Data Protection Regulation 7 (GDPR) in Europe and the Personal Information Security Specification (PISS) in China 8 . PISS emphasizes that all collected personal data should be immediately de-identified and stored separately from their profile data 9,10 . However, even after de-identification, anonymized personal data still face re-identification risk and are vulnerable to linkage attacks launched by either honest-but-curious data collectors or malicious hackers 11,12 . Therefore, the re-identification risk of individual data not only reflects the privacy risk level of individuals but also supports regulators in formulating privacy protection policies. Beyond this, it is difficult for individuals and regulatory agencies to obtain the complete dataset maintained by service providers, and they can only infer the re-identification risk from the released incomplete statistical information. The re-identification risk of an individual is closely related to her/his tuple frequency. The tuple frequency is defined as the count of a specific data value combination, where a high tuple frequency signifies a low re-identification risk. If an attacker has sufficient background knowledge for the linkage attack, individuals will be re-identified by their unique data records with 100% probability. Therefore, the uniqueness of individual data has attracted extensive research attention 13 . According to the 1990 and 2000 U.S. census data releases, it takes only three attributes, namely the date of birth, gender, and zip code, to uniquely identify 87% and 63% of the population 14,15 . Montjoye found that it takes only four spatiotemporal points in trajectory data to uniquely identify 95% of the individuals in the location dataset and 90% in the credit card dataset 16,17 . By exploiting the uniqueness contained in the sampled data records or the statistical characteristics of datasets, a latent attacker can measure the uniqueness of individuals given incomplete statistical information 18 and even recover the original personal data 19 . However, using uniqueness to describe the re-identification risk is sometimes inaccurate, because non-unique data records can still be exploited to re-identify individuals from anonymized datasets with a certain probability 20 . The attribute dependence of experimental datasets. Inspired by k-anonymity 21,22,23 , we propose to leverage k-indistinguishability as an indicator to describe the re-identification risk of individuals. If the tuple frequency of an individual in an anonymized dataset is not less than k, then this individual is k-indistinguishable.
If the probability of a specific individual being k-indistinguishable can be derived for k = 2, 3, …, one can have a relatively more comprehensive understanding of her/his re-identification risk. Unfortunately, given incomplete dataset information, the state-of-the-art privacy risk research cannot determine the probability of an individual being k-indistinguishable when k ≥ 2. In light of this, the paper presents how to accurately predict the re-identification risk for a given individual with only the incomplete statistical information of the target dataset. Specifically, given some statistical information, the probability mass function (PMF) of the RH distribution can be used to estimate the frequency of the tuples containing strongly dependent attribute pairs. In real-world applications, an approximate distribution of the RH distribution is employed to calculate the tuple frequency in an anonymized dataset for computational efficiency, and to further derive the probability of an individual being k-indistinguishable in the target dataset. Our experiments use random 24 , demographic 25 , medical 13 , and educational 26 datasets, and the results show that for all involved datasets, the average AUC of our proposed TFRR is 0.86~0.98, suggesting a high prediction accuracy. For datasets containing strongly dependent attribute pairs, the value dependence knowledge is introduced to rectify the prediction results and the average AUC reaches 0.95~0.98. Our research reveals a general rule determining the distribution of the tuple frequency, which is applicable for all random datasets and most real-world datasets and provides a concise yet effective tool for the re-identification risk prediction of anonymized datasets. With the incomplete statistical information of the target dataset, both individuals and regulators can easily use this tool to predict the re-identification risk. Beyond this function, one can even predict the re-identification risk of submitting data to service providers according to their published data formats, statistical information, and privacy protection plans, and accordingly question whether they obey the existing privacy protection laws, so as to foresee and prevent privacy threats. Consider that dataset D is a table with columns representing attributes and rows representing data records. Each cell in the table maintains the value of a particular attribute of a particular data record. A tuple is defined as an ordered list drawing one value per attribute, to enumerate all possible cases of data records in D, some of which may not appear in D. From the perspective of probability theory, the frequency of a specific tuple in a target dataset follows the RH distribution (see Methods). However, the dependence between the values in the tuple will affect the tuple frequency distribution. Therefore, we define value dependence (see Methods) to describe the dependence between the value pairs of a tuple, and use the value dependence knowledge of a particular tuple to rectify the prediction results. To grasp a general understanding of the dependence between an attribute pair, we define the attribute dependence and analyze the dependence between each attribute pair in the experimental datasets (including random and real-world datasets). The attribute dependence profiles an asymmetric relation between two attributes.
The dependence of 83 attribute B on attribute A can be calculated as follows, The approximate RH distribution. 123 Because of the computational complexity of the RH distribution, we expect to find an approximate 124 distribution to reduce the computational burden. According to the analysis in Methods, we find that when 125 can be employed to approximate the RH distribution. To 126 have a clearer understanding of the difference between them, we randomly select many tuples from the 127 random datasets and use the PMFs of the two distributions to calculate the occurrence probability of these 128 tuples. The maximum probability distance (MPD) is used to measure the difference between the binomial 129 distribution and the RH distribution, defined as, 130 132 We use the same 64 parameter sets as in the previous experiment to generate random datasets, and 133 randomly select 1000 tuples from each dataset. The MPD between the binomial distribution and the RH 134 distribution is shown in Fig. 3. 143 The possibility of an individual being k -indistinguishable in random datasets. 144 We randomly select the data records of 1000 individuals from 64 random datasets and use Eq. 11 to 145 estimate the possibility of these individuals being k -indistinguishable (see Methods for details). The 146 result of binary classification is shown in Fig. 4. 147 The possibility of an individual being k -indistinguishable in real-world datasets. The knowledge of attribute dependence and value dependence can reveal the internal relation between 203 different attributes and values of data records, which are important indicators for the value distribution of data 29 . Existing privacy protection methods, such as differential privacy 30,31,32 , can hide the original data 205 while ensuring their availability by adding random noise regularly. Although we can obtain useful 206 information, such as the tuple frequency distribution and top-k data from the de-identified dataset 33 , the 207 value and utilization of the de-identified dataset are substantially reduced due to the impact of the added 208 noise on the attribute dependence and value dependence 34 . Therefore, we plan to study how to customize 209 the differential privacy budgets and noise generation methods according to the predicted re-identification 210 risk for specific individuals, and how to maximize the preservation of the attribute dependence and value 211 dependence information. In addition, trajectory datasets typically contain temporal and sequential location 212 data with strong attribute dependence 35 , which makes the binomial approximation ineffective in privacy 213 risk prediction. This also points out a new research direction for future work. 214 Considering (1 )  j jd X satisfies the j -RH distribution characterized by 1 , , , , j N j n n . 224 When 2  j , the PMF of the j -RH distribution can be obtained recursively as follows, Equation 4 can be interpreted as that, given that sub-tuple 229 236 Therefore, we consider that the hypergeometric distribution is only a special case of d -RH distributions The binomial approximation of the d-RH distribution. x is as follows, 247 Then the probability of record r matching tuple i x for k rounds is as follows, called as a strongly dependent value pair, and the threshold is set to 0.5 in this paper. 263 The frequency distribution of tuples with strongly dependent value pairs. 264 The physical significance of d -RH distribution can be summarized as follows. 
Let ... The frequency distribution of x in D can be approximated by B(n, p). The probability of a specific individual being k-indistinguishable. Then the probability of p being k-indistinguishable when k ≥ 2 can be calculated as follows. All simulations were implemented in Matlab. The source code to reproduce the experiments will be deposited in Code Ocean or GitHub.
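The binomial approximation above lends itself to a short numerical sketch. The snippet below is a minimal illustration only: it assumes the probability of being k-indistinguishable is taken as P(X ≥ k) with X ~ B(n, p), where n is the dataset size and p an estimated tuple-matching probability, and it uses the ordinary hypergeometric law as a stand-in for the one-dimensional RH distribution when computing the maximum probability distance (MPD). The paper's exact Eq. 11 is not reproduced here, and all function and parameter names are hypothetical.

```python
from scipy.stats import binom, hypergeom


def prob_k_indistinguishable(n_records, p_match, k):
    """Approximate P(tuple frequency >= k) with a binomial model B(n, p).

    n_records : size of the target dataset (assumed known from statistics)
    p_match   : estimated probability that a record matches the tuple (assumed)
    k         : indistinguishability threshold, k >= 2
    """
    # P(X >= k) = 1 - P(X <= k - 1) under the binomial approximation
    return 1.0 - binom.cdf(k - 1, n_records, p_match)


def max_probability_distance(n_records, n_matching, sample_size):
    """Illustrative MPD between a binomial PMF and a hypergeometric PMF
    (the hypergeometric law is the one-dimensional special case of the
    RH distribution mentioned in Methods)."""
    p = n_matching / n_records
    return max(
        abs(binom.pmf(x, sample_size, p)
            - hypergeom.pmf(x, n_records, n_matching, sample_size))
        for x in range(sample_size + 1)
    )


if __name__ == "__main__":
    print(prob_k_indistinguishable(10_000, 3e-4, k=2))   # ~0.80 for these toy numbers
    print(max_probability_distance(10_000, 30, 500))
```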
2,367.4
2021-01-01T00:00:00.000
[ "Computer Science" ]
AN1284 attenuates steatosis, lipogenesis, and fibrosis in mice with pre-existing non-alcoholic steatohepatitis and directly affects aryl hydrocarbon receptor in a hepatic cell line Non-alcoholic steatohepatitis (NASH) is an aggressive form of fatty liver disease with hepatic inflammation and fibrosis for which there is currently no drug treatment. This study determined whether an indoline derivative, AN1284, which significantly reduced damage in a model of acute liver disease, can reverse steatosis and fibrosis in mice with pre-existing NASH and explore its mechanism of action. The mouse model of dietary-induced NASH reproduces most of the liver pathology seen in human subjects. This was confirmed by RNA-sequencing analysis. The Western diet, given for 4 months, caused steatosis, inflammation, and liver fibrosis. AN1284 (1 mg or 5 mg/kg/day) was administered for the last 2 months of the diet by micro-osmotic-pumps (mps). Both doses significantly decreased hepatic damage, liver weight, hepatic fat content, triglyceride, serum alanine transaminase, and fibrosis. AN1284 (1 mg/kg/day) given by mps or in the drinking fluid significantly reduced fibrosis produced by carbon tetrachloride injections. In human HUH7 hepatoma cells incubated with palmitic acid, AN1284 (2.1 and 6.3 ng/ml), concentrations compatible with those in the liver of mice treated with AN1284, decreased lipid formation by causing nuclear translocation of the aryl hydrocarbon receptor (AhR). AN1284 downregulated fatty acid synthase (FASN) and sterol regulatory element-binding protein 1c (SREBP-1c) and upregulated Acyl-CoA Oxidase 1 and Cytochrome P450-a1, genes involved in lipid metabolism. In conclusion, chronic treatment with AN1284 (1mg/kg/day) reduced pre-existing steatosis and fibrosis through AhR, which affects several contributors to the development of fatty liver disease. Additional pathways are also influenced by AN1284 treatment. Introduction Non-alcoholic steatohepatitis (NASH) is an aggressive form of non-alcoholic fatty liver disease (NAFLD) with an excess of fatty acids and triglycerides, lobular inflammation, hepatocyte injury, and fibrosis (1), accompanied by insulin resistance and oxidative stress (2).Insulin resistance promotes lipogenesis through an influx from adipose tissue of free fatty acids (FFAs) into the liver.Oxidative stress impairs fatty acid oxidation, compromising the liver's ability to use, store, and export FFAs as triglycerides (3).This causes apoptosis to hepatocytes through activation of signalregulating kinase, which upregulates MAP kinases JNK and p38 (4).Cell damage stimulates hepatic stellate cells to produce TGF-b that induces fibrosis by activating myofibroblasts (5).Other cytokines are produced by FFAs (6) through stimulation of Toll-4-like receptors on Kupffer cells and circulating leukocytes (7) and by the bacterial antigen, lipopolysaccharide (LPS).The concentrations of LPS in the circulation and liver of subjects with NASH are higher than those in controls (8). 
Most compounds tested in rodent models of NAFLD or NASH (9-13) (among others) were given with the initiation of the high-fat Western diet (WD), and thus, any effect they had is mainly preventive.A few compounds, each with a different mode of action, were able to ameliorate steatosis when given to mice several weeks after commencement of the diet: Firsocostat, an acetyl-CoA carboxylase (ACC) inhibitor, Tropifexor, an agonist of Farnesoid X receptor (FXR), and cinnabarinic acid, an endogenous agonist of the aryl hydrocarbon receptor (AhR) (14, 15).Firsocostat and Tropifexor arrested the development of fibrosis (15).Although all these drugs reduced steatosis in human subjects with NASH, they had no effect on fibrosis (16).Neither did the novel dual proliferator-activated receptor (PPAR) agonist Saroglitazar (17), although it had fewer adverse effects than other PPAR agonists in humans (18).Thus, there is still a need for safe, clinically effective drugs for treating NASH that can also halt development of fibrosis.The pathophysiology of NASH is complex and probably requires activation of multiple targets for more successful treatment against fibrosis (19). AN1284 [3-(indolin-1-yl)-N-isopropylpropan-1-amine 2HCl] is a novel drug with multiple actions.It inhibited cytotoxicity resulting from oxidative stress and reduced the release of proinflammatory cytokines in LPS-activated macrophages (20) by inhibiting phosphorylation of p38 MAPK and nuclear translocation of Activator protein-1 (21).In mice with acute liver injury caused by LPS/D-galactosamine injection, s.c.injection of AN1284 (0.25-0.75 mg/kg) prevented the elevation of TNF-a and plasma alanine transaminase (ALT) and reduced hepatic damage and mortality (21).Chronic treatment of BSK-db/db mice with type 2 diabetes by AN1284 (2.5 and 5 mg/kg/day) by s.c.implanted micro-osmotic-pumps (mps) before disease development prevented renal damage and reduced elevation of plasma ALT and hepatic fat accumulation, while preserving insulin sensitivity and pancreatic b cell mass (22). The current study examined the effect of AN1284 (1 and 5 mg/ kg/day) administered for 2 months by mps, on hepatic steatosis and fibrosis in mice with pre-existing NASH.This was produced by feeding for 4 months on a modified low-trans-fat Western-diet combined with low choline.RNA sequencing analysis (RNA-seq) confirmed that diet replicated several changes in cellular processes seen in humans with NASH.HUH7 human hepatoma cells were used to show that AN1284 decreased conversion of palmitic acid (PA) to lipid at concentrations compatible with those found in the liver in mice and to elucidate its mechanism of action. In vivo NASH studies in mice on WD Experiments were performed according to the guidelines of the Animal Care and Use Committee of the Hebrew University (NIH approval number OPRR-A01-5011).Male C57BL/6JOlaHsd mice, aged 4 weeks for NASH experiments and 6 weeks for the CCl 4 fibrosis model (Harlan; Ein Kerem, Israel), were housed (five per cage), in a pathogen-free unit under controlled 12-h light/12-h dark cycle and an ambient temperature of 21 ± 1°C and humidity 40%-50%.The cages contained Teklad Sani-chips (ENVIGO) bedding and two 2" small play tunnels for environmental enrichment.Male mice were selected because they develop a more severe form of the disease than females and have lower antioxidant enzymes (23). 
The mice were maintained for 2 months on WD (n = 30) or ND (n = 15) and weighed twice weekly.Then, under ketamine 100 mg/ kg/xylazine 10 mg/kg anesthesia, they were implanted with mps delivering saline, or AN1284 (1 or 5 mg/kg/day)/month (ND n = 5/ dose) (WD n = 10/dose) for the next 2 months (Figure S1A).A new pump was implanted under anesthesia in the second month.In a previous study, there were no significant differences in the effects of 2.5 and 5 mg/kg/day of AN1284 on the parameters measured in diabetic mice (22).Therefore, in the current study, we administered 1 and 5 mg/kg/day.At the end of the experiment, blood was collected by cardiac puncture under ketamine/xylazine anesthesia, Induction of liver fibrosis by carbon tetrachloride Mice (n = 25) that were fed with ND were injected i.p. with carbon tetrachloride (CCl 4 ) (0.5 mg/kg in corn oil) (Sigma), twice weekly for 7 weeks.Four controls were injected with saline (1 ml/ kg).Four weeks later, five mice injected with CCl 4 were sacrificed and the livers were examined to confirm the presence of fibrosis.The remaining eight mice were given saline by s.c.injection and seven others were implanted with mps delivering AN1284 (1 mg/ kg/day) for 3 weeks. While the current study was in progress, we completed an examination of the pharmacokinetics and metabolism of AN1284 in mice.Peak drug concentrations were similar in plasma and liver after s.c.injection, but nearly 50-fold higher in the liver when the compound was given orally (25).This suggested that oral administration should enable AN1284 to reduce hepatic damage.Therefore, AN1284 (1 mg/kg/day) was given to eight mice (four per cage) for 3 weeks via the drinking fluid, 4 weeks after they had developed fibrosis induced by CCl 4 injections.Ten others received normal drinking fluid.They were weighed once weekly, their fluid intake was measured twice weekly, and the concentration of AN1284 in the fluid was adjusted accordingly.Seven weeks after commencement of the CCl 4 injections, the mice were processed for histological and biochemical analyses as described below. 
Biochemical and histological analyses The livers were extracted as described in Ref (26).and their triglyceride content was determined using the Cobas C-111 bioanalyzer (Roche, Switzerland), normalized to wet tissue weight.Plasma ALT was measured by Reflotron chemical blood analyzer (Roche Diagnostics, Mannheim, Germany).Frozen liver was placed in an embedding medium and used for the measurement of hepatic fat content by Oil Red O (ORO) staining.The rest of the liver was fixed for 24 h in 4% formaldehyde solution (Bio-Heart Ltd., Jerusalem, Israel), induced in 70% ethanol and embedded in paraffin, cut into 5-µm slices, and stained with hematoxylin and eosin (H&E) for general damage.Fibrosis was assessed with Sirius Red (SR) (Sigma, 365548), collagen 4 (Col4) (Abcam, ab236640), and immunohistochemical staining with primary antibodies against a-SMA (Sigma, A2547).Antibodies against Ly6B (Bio-Rad, MCA771) were used for neutrophils and natural killer cells, F4/80 for macrophages (Bio-Rad, MCA497), CD3 for T cells (Bio-Rad, MCA1477), CD45R for B cells (Santa Cruz, sc-19597), CD36 (Abcam, ab133625), and iNOS (Abcam, ab3523).Histopathological analysis was performed by a light microscope using the program Cellsens Entry (Olympus, Japan).Macrophages, ORO, a-SMA, and SR were quantified in 12 random images at ×40 magnification.Using the ImageJ software, the colored area was calculated, normalized, and expressed as a percentage of the whole picture. Quantitative polymerase chain reaction RNA was extracted from snap-frozen liver tissues (miRNeasy Micro Kit, Qiagen), from six samples/group.Its quantity and integrity were checked (Nanodrop, spectrophotometer) and reversetranscribed into complementary DNA (qScript cDNA Synthesis Kit, QuantaBio).Genes were determined by PCR with an SYBR Green Kit and (QuantaBio) on the CFX384 Touch Real-Time PCR Detection System (Bio-Rad).The relative expression of target genes was normalized by hydroxyl methyl bilane synthase expression as an internal control.The primer sequences used are listed in Table 1. RNA sequencing analysis For RNA-Seq analysis, an Illumina Hi-seq sequencer was used to measure the differences in global gene expression between the experimental groups.Each sample generated approximately 70 × 10 6 reads at the length of 86 bases.Differential expression data of the whole transcriptome was subjected to Gene Set Enrichment Analysis (GSEA) with the corresponding human ortholog gene symbols.GSEA uses all differential expression data (cutoff independent) to determine whether a priori-defined set of genes show statistically significant, concordant differences between two biological states.The hallmark gene set collection from MSigDB (molecular signature database) was used for the analysis.For each comparison, all statistically significant, differentially expressed genes were subjected to pathway enrichment analysis using QIAGEN's ingenuity pathway analysis (IPA, QIAGEN Redwood City, www.qiagen.com/ingenuity),GeneAnalytics and EnrichR, and functions/diseases enrichment analysis by IPA. 
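As a rough illustration of the percent-area quantification described above (stained area expressed as a percentage of each ×40 field), the sketch below thresholds an RGB image and reports the fraction of "positive" pixels. The threshold values, file names, and channel logic are placeholders, not the ImageJ settings actually used in the study.

```python
import numpy as np


def stained_area_percent(img, red_min=150, green_max=120, blue_max=120):
    """Percentage of pixels passing a simple red-stain threshold,
    relative to the whole picture (cf. the ImageJ quantification in the
    text). Threshold values are illustrative placeholders."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    mask = (r >= red_min) & (g <= green_max) & (b <= blue_max)
    return 100.0 * mask.mean()


# Example: average over 12 random fields per animal, as described above
# (images could be loaded with, e.g., imageio.v3.imread):
# fields = [imageio.v3.imread(f"field_{i}.png") for i in range(12)]
# print(np.mean([stained_area_percent(f) for f in fields]))
```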
In vitro studies HUH7 human hepatoma cells were incubated for 24 h in medium containing BSA.To see whether AN1284 can reduce steatosis from an FFA by a direct action on liver cells, PA was added together with different concentrations of AN1284 for 24 h.Lipid content was quantified by ORO staining.Since RNA-seq analysis suggested that AN1284 could act via the aryl hydrocarbon receptor (AhR), we measured the effect of AN1284 on the nuclear translocation of AhR, by immunofluorescence intensity, 15 min after its addition to the cells.We used a specific antibody (Abcam, ab190797), analyzed its intensity with ImageJ, and normalized it to the control group.RT-qPCR was used to measure the target genes of AhR after 24 h: fatty acid synthase (FASN), SREBP-1c, acyl-CoA oxidase 1 (ACOX1), and cytochrome P450-1 (CYP1a1).siRNA for human AhR from TriFECTa Kit DsiRNA Duplex purchased from ITD was used to silence AhR.The reverse transfection of these siRNAs onto HUH7 cells was performed by means of TransIT-X2 Dynamic Delivery System (MC-MIR-6000, Mirus) according to the manufacturer's instructions.To examine the effect of siRNA on AhR expression, total protein was extracted from cells 48-72 h after transfection, and AhR protein levels were measured using Western blot (WB) with primary AhR antibody (Abcam, ab190797).The siRNA-transfected cells were treated with AN1284 and analyzed. Protein extraction and Western blotting Total protein extract was obtained by using Radioimmuno Precipitation Assay lysis buffer for five samples/group.Cell lysates containing 50 mg of total protein were then added to SDS-PAGE gels and transferred to Nitrocellulose membranes (Bio-Rad, 1704158).Membranes were blocked in 1% non-fat milk and incubated overnight at 4°C with primary antibodies, RXRa (Abcam, ab125001), and mouse anti-b-actin (MP Biomedicals, 691001).The signals were developed with an enhanced chemiluminescence solution (Bio-Rad, 1705060) and visualized on a Bio-Rad bioluminescence device.Band intensities were quantified using ImageJ and normalized to actin. Measurement of hepatic levels of AN1284 and its indole metabolite AN1422 Liver samples were homogenized (100 mg/ml) in phosphate buffered saline.Twenty microliters of internal standard (rivastigmine 750 ng/ml) and 20 µl of ultra-pure water were added.AN1284 and its oxidized metabolite, AN1422, were extracted and measured as described in Weitman et al. (25). Statistical analysis Studies were designed to generate groups of equal size whenever possible, and any variation in group size within an experiment was due to unexpected loss of an animal or sample for measurement.All statistical analyses were performed using GraphPad Prism 9.50 (GraphPad Software Inc., San Diego, CA, USA).Data were compared by the Kruskal-Wallis non-parametric method, followed by the Mann-Whitney post-hoc test if F achieved P < 0.05.Body weight changes over time were compared by a two-way repeated measures ANOVA using SPSS version 28.Data are expressed as the mean ± SD.A p-value of <0.05 was considered to be significant. Liver concentrations of AN1284 and its oxidized metabolite There were no significant differences in the hepatic concentrations of AN1284 after administration of 1 or 5 mg/kg/ day (37.9 ± 9.7 and 51.4 ± 12.0 ng/g), respectively, but those of the indole metabolite, AN1422, were significantly higher after the 5 mg/ kg/day dose (3.4 ± 1.1 vs. 9.4 ± 4.9 mg/kg). 
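A minimal sketch of the statistical comparison described above (Kruskal-Wallis omnibus test followed by pairwise Mann-Whitney tests only when the omnibus test is significant). The group labels, example values, and significance threshold are placeholders; this is not the analysis script used by the authors.

```python
from itertools import combinations
from scipy.stats import kruskal, mannwhitneyu


def compare_groups(groups, alpha=0.05):
    """groups: dict mapping group name -> list/array of measurements."""
    h_stat, p_omnibus = kruskal(*groups.values())
    results = {"kruskal_p": p_omnibus, "pairwise": {}}
    if p_omnibus < alpha:  # post-hoc comparisons only if the omnibus test passes
        for (name_a, a), (name_b, b) in combinations(groups.items(), 2):
            _, p = mannwhitneyu(a, b, alternative="two-sided")
            results["pairwise"][(name_a, name_b)] = p
    return results


# Example with placeholder ALT values (U/L):
# print(compare_groups({"ND": [20, 25, 22],
#                       "WD+saline": [90, 110, 85],
#                       "WD+AN1284": [45, 50, 40]}))
```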
AN1284 attenuates liver steatosis During 4 months of feeding, mice on WD gained significantly more weight than those on the normal diet (p < 0.0001; Figure 1B).There were no significant differences in the weight gain between the three groups of mice on the WD.At this time, livers of mice on the WD showed significant hepatic fat content as showed with ORO staining (Figure S1B).During the last 2 months after implantation of the mps, there was still a significant difference in weight gain between saline-treated mice on a WD and those on an ND.AN1284 only significantly reduced weight gain at a dose 5 mg/kg/day (Figure 1B).After 4 months, the livers of saline-treated mice fed a WD showed extensive cell ballooning, inflammation (H&E), and fat accumulation (ORO) (Figure 1A).AN1284 (1 and 5 mg/kg/day) significantly decreased liver weight (Figures 1C, D), lipid content (Figure 1E), triglycerides (Figure 1F), and serum ALT (Figure 1G), despite the fact that they remained on the WD throughout the entire period.AN1284 also reduced hepatic cell ballooning and inflammation (Figure 1A).Additionally, we checked whether AN1284 has any effect in mice fed a ND.Neither dose of AN1284 had any significant effect on body weight, liver weight, ALT, and oil red content. AN1284 attenuates liver fibrosis Moderate pericellular fibrosis in the livers of mice on the WD was demonstrated by an increase in staining with SR and Col4 and by TGF-b1 mRNA levels (Figures 2A-D Frontiers in Endocrinology frontiersin.orgkg/day) decreased SR.TGF-b1 mRNA was significantly reduced by 1 and 5 mg/kg/day but Col4 was significantly reduced only by a dose of 1 mg/kg/day.Since the degree of fibrosis was only moderate in the mice on WD, we performed additional experiments to assess the effect of AN1284 in mice on ND in which liver fibrosis was induced by injections of CCl 4 during a 7week period.Fibrosis, assessed by SR and a-SMA staining, was already present at 4 weeks (Figure 2E).SR intensity increased significantly by 7 weeks.AN1284 (1 mg/kg/day) given by mps or orally, started after 4 weeks of CCl 4 injections when fibrosis was clearly present, decreased the levels of SR, but a-SMA was only reduced significantly after oral administration.The results indicate that AN1284 is able to halt the progression of liver fibrosis (Figures 2E-G).Frontiers in Endocrinology frontiersin.org AN1284 reverses hepatic gene expression related to liver diseases We used RNA-Seq analysis to elucidate the influence of AN1284 on WD-induced hepatic gene expression profile.This enabled us to identify the canonical pathways altered by both the WD and drug treatment and to assess the differences in global gene expression between groups.Principal component analysis (PCA) showed that the six experimental groups could clearly be separated by the first two principal components (PC1 and PC2; Figure 3A).Compared to a ND, the WD significantly changed the expression of 4,600 genes [with a Base Mean (BM) >150].Those most changed by the WD and reversed by AN1284 are shown in Figure 3B.IPA and GSEA also revealed the top 20 pathways significantly altered by the diet that are associated with liver diseases (Figure 3C).The WD strongly activated pathways of hepatic steatosis and fibrosis and those encoding inflammation, oxidative stress, liver damage, and liver necrosis.All were significantly altered by AN1284 treatment, together with liver metabolism and elevation of the (FXR)/retinoid X receptor (RXR) and liver X (LXR) receptors (Figures 3D, E).The xenobiotic metabolism and 
AhR pathways were also significantly altered by AN1284.IPA prediction of the up-or downstream regulators by AN1284 (Supplementary Table 1) indicated a role of several nuclear receptors (AhR, RXR, LXR, CAR, and FXR) and the inhibition of several cytokines and growth factors (i.e., TGF-b, TNF-a, IL-1b, and FGF).IPA pathways analysis suggested a decrease for AhR and an elevation of RXR and LXR (Figures S2-S4). Effect of AN1284 on LXR/RXR and FXR/ RXR pathways In recent years, the role of nuclear receptors in liver steatosis and NASH has been investigated.While some of them were initially characterized as xenobiotic receptors, subsequent observations have pointed to their equally important metabolic functions (27,28).FXR and LXR control metabolic processes abundantly expressed in the liver.IPA and GSEAs indicated that AN1284 treatment activated the FXR/ RXR pathway with a p-value of 20.7 and the LXR/RXR pathway with a p-value of 15.6 (Figure 3D, S2, S3).This was verified by WB analysis, which showed that levels of hepatic RXRa protein increased by the diet were further elevated by AN1284 (Figures 4A, B).Although RXRa protein levels did not change in the liver of mice, 7 weeks after CCl 4 injections, they were greatly increased by both routes of AN1284 administration (Figures 4C, D).The WD also increased the percent area of fatty acid translocase (CD36)-positive cells ( Figures 4E, F) and hepatic mRNA levels of FASN (Figure 4G) as suggested from RNA-Seq results (Figure S2).AN1284 (1 and 5 mg/kg/day) significantly decreased gene expression of CD36, ACC, and FASN. AN1284 switches hepatic immune response from pro-to anti-inflammatory Hepatic inflammation plays an important role in the progression of NASH.Since the AhR is involved in many inflammatory responses, including suppression of cytokine release in LPS activated macrophages (28), we examined whether AN1284 also influences hepatic inflammation.In the livers of saline-treated mice, the WD significantly increased the number of hepatic T cells (CD3), macrophages (F4/80), and B cells (CD45R), but not neutrophils (Ly6B; Figures 5A-E).It also increased hepatic gene expression of CCL2 (Figure 5F), a marker of immune activation.AN1284 (1 mg/kg/day), but not 5 mg, increased the number of neutrophils and further increased that of macrophages, B cells, and T cells (Figures 5A-E).AN1284 depressed CCL2 gene expression (Figure 5F) and increased that of IL-10 (Figure 5G). 
AN1284 reduces steatosis in isolated human hepatoma cells through AhR nuclear translocation Previous studies indicated that AhR acts as a "double-edged sword" in the progression of NAFLD, depending on the specific ligand (29).In order to determine whether AN1284 can have a direct effect on liver cells, we used a HUH7-human hepatoma cell line.The addition of PA/BSA complex to HUH7 cells for 48 h increased fat content (p < 0.0001).This was significantly reduced by AN1284 (0.21, 2.1, and 6.3 ng/ml) (Figures 6A, B).The concentrations were in the range of those found in the liver of mice treated with 1 and 5 mg/kg/ day.BSA alone had no effect on the measurements.Since the RNA-Seq results suggested AhR as an upstream regulator, we analyzed its nuclear translocation in the HUH7 cells incubated with PA, 15 min after the addition of AN1284, and found this to be increased by AN1284 (6.3 ng/ml) (Figures 6C, D), together with upregulation in the expression of AhR target gene CYP1a1 and also ACOX1 (Figures 6E, F) 24 h later.AN1284 also decreased SREBP-1c mRNA, the principal transcriptional regulator of FASN that was elevated by PA (p < 0.001, Figure 7E).Similarly, FASN mRNA was decreased by AN1284 (2.1 and 6.3 ng/ml), opposing the increase caused by PA addition (p < 0.001; Figure 7F).In order to confirm that AN1284 suppresses fat accumulation in HUH7 cells through AhR, we silenced the receptor by using siRNA.AN1284 no longer reduced lipid in cells pre-treated with siRNA (Figures 7A, B).AhR protein levels were substantially reduced in HUH7 cells treated with AhR siRNA (Figures 7C, D).When these cells were incubated with PA and pre-treated with AN1284 siRNA, the levels of SREBP-1 and FASN genes remained elevated (Figures 7E, F).We also checked if AN1284 directly elevated RXR-a in the hepatoma cells.No change was observed in its protein levels in the WB analysis (Figures 7C, D). Discussion In our earlier study performed in diabetic mice (22), AN1284 was administered before the mice had any kidney or liver damage, in contrast to the current study in which drug treatment was only started after there were clear signs of hepatic steatosis and/or fibrosis.We now show that the WD given to mice for 4 months replicated much of the pathology in the liver of human subjects with NASH.It was supported by RNA-seq and IPA and GSEAs showing that the diet upregulated several of the major pathways affected in humans.These included hepatic steatosis, inflammation, fibrosis, hepatic cell proliferation, and oxidative stress.The findings were confirmed by direct measures of genes and proteins, which included significant increases in TGF-b1, Col4, and CD36, all of which are higher in humans with NASH (30)(31)(32).CD36 facilitates the intracellular uptake of FFAs and their esterification into triglycerides, while FASN catalyzes the last step in fatty acid biosynthesis and is believed to be a major determinant of lipogenesis (32,33). 
AN1284 (1 or 5 mg/kg/day), administered for 2 months by continuous release mps after commencement of the WD, reduced the deterioration of many of its deleterious effects, while the mice remained on the diet.This included the alterations in liver pathology, steatosis, and fibrosis and the percent of inducible nitric oxide synthase (iNOS)-positive cells (Figure S5), indicating that it was able to lower oxidative stress.AN1284 (1 mg/kg/day) also reduced the gene expression of pro-inflammatory factors TNFa and CCL2 and increased that of IL-10.CCL2 promotes fibrosis by recruiting pro-inflammatory monocytes (34).In the later, resolution stage of NASH, macrophages change their phenotype, expressing cytokines like IL-10 that suppress the proliferation and effector functions of CD4 + and CD8 + T cells and repair wound healing (35).AN1284 decreased SREBP-1c mRNA, while increasing that of ACOX-1, an enzyme found in peroxisomes and mitochondria, which oxidizes straight chain fatty acids like PA. Other studies have shown that inhibition of ACOX-1 or an abnormal ACOX-1 gene (36) can increase steatosis. Numerous nuclear receptors including FXR, LXR, RXR, and AhR have been suggested as regulators of NAFLD and NASH progression (27,28).RNA-Seq analysis points to the involvement of these nuclear receptors in the mechanism of action of AN1284.RXRa is a nuclear receptor that forms a heterodimer with other such receptors like FXR, LXR, and PPAR to promote cholesterol efflux.It helps to regulate glucose metabolism, apoptotic cell clearance, immune cell proliferation, and inflammatory gene repression (37).FXR is reduced in patients with NASH (38), and its levels of expression are inversely correlated with disease severity (39).When given either by mps or in the drinking water, AN1284 activated the FXR-RXR pathway and increased the levels of RXRa Using RNA-Seq to elucidate the mechanism of action of AN1284, we found that it reduced AhR mRNA levels and activated genes downstream of AhR.AhR signaling appears to be involved in immune-mediated diseases in humans (40).Depending on the particular cell type and the activating ligands, AhR was reported to have an anti-inflammatory and tissue-protective function in immune-mediated liver disease (41).Yet, the role of AhR in NAFLD remains controversial and appears to depend on the model used.In mice with constitutively activated human AhR given a WD, the level of steatosis was higher than in controls (42).However, stimulation of AhR with indole propionic acid, which shares some of the anti-inflammatory activity of indolines, but at higher concentrations (43), alleviated steatosis in mice on a WD (44).Moreover, activation of AhR in hepatic stellate cells prevented fibrosis induced by CCl 4 injections by blocking downstream genes required for fibrogenesis (45).Additionally, AhR was shown to play a role in the regulation of body mass in mice fed a WD (46).In a previous study, on db/db mice, AN1284 arrested body weight gain at a dose of 5 mg/kg/ day only after 2 months of treatment and significantly increased total body fat oxidation (22).In the current study, both doses of AN1284 attenuated liver weight, but body weight gain was again only significantly decreased by a dose of 5 mg/kg/day.In a human hepatoma cell line incubated with PA, we found that AhR was translocated to the nucleus 15 min following the administration of AN1284.The expression of CYP1a1 gene downstream of AhR, was upregulated 24 h later, but AN1284 had no direct effect on protein levels of AhR and RXR-a 
(Figures 7C, D).On the other hand, genes related to the LXR pathway, FASN and SREBP-1c, were significantly reduced by AN1284 (2.1 and 6.3 ng/ml).Silencing AhR in the hepatoma cells confirmed that part of the direct actions of AN1284 is mediated through AhR activation. While one or the other of the two doses of AN1284 given in this study appeared to be more effective in altering some measures of liver pathology, there were no statistically significant differences between any of their effects.Neither did they produce significant differences in hepatic concentrations, but those of the indole metabolite were higher after 5 mg/kg/day.Although not measured in the current study, the hepatic concentrations of AN1284 after administration of 1 mg/kg/day in the drinking water that significantly reduced fibrosis in the CCl 4 model were similar to those achieved by administration of 2.5 mg/kg/day by mps (25). In conclusion, AN1284 given to mice for 2 months at doses of 1 and 5 mg/kg/day can mitigate the deterioration of hepatic damage, steatosis, and fibrosis caused by a modified WD, in part through the AhR nuclear receptor that controls several, independent processes that were shown to promote NASH in human subjects.The beneficial effect of AN1284 on liver pathology in NASH may be due to a combination of a reduction in liver weight, inflammation, oxidative stress, and fibrosis. 2 AN1284 FIGURE 2 AN1284 reduces hepatic fibrosis in mice on a WD or after CCl4 injection.(A) WD increases the area of Sirius Red (SR) and Col4 and TGF-b1 compared to that in mice on ND.Calibration bar, 20 mM.(B) SR staining is significantly decreased by AN1284 (1 and 5 mg/kg/day).(C) Col4 staining is significantly decreased by AN1284 (1 mg/kg/day) but not by 5 mg/kg/day.(D) TGF-b1 mRNA is significantly reduced by AN1284 (5 mg/kg/day) but not by 1 mg/kg/day.(E) SR and a-SMA staining in mice is increased 4 and 7 weeks after bi-weekly injections of CCl4, a model of liver fibrosis.Calibration bar, 20 mM.(F) SR staining is significantly reduced by AN1284 1 mg/kg/day given by mps or in the drinking fluid.(G) a-SMA staining is significantly reduced by AN1284 given in the drinking fluid.Significantly different from control, *p < 0.05; **p < 0.01, ***p < 0.001.Significantly different from CCl4 4 weeks, ‡p < 0.01; significantly different from saline, #p < 0.05, ##p < 0.01, ###p < 0.001. 3 FIGURE 3 Effect of WD and AN1284 treatment on hepatic gene expression.(A) PCA plot showing RNA-Seq samples analyzed by diet and treatment groups.The WD substantially alters gene expression, while AN1284 treatment returns it to that of mice on a ND.(B) Heat map of genes most changed by the WD and the effect of AN1284 (1 and 5 mg/kg/day) on them.Red and blue colors indicate high and low gene expression, respectively.(C) IPA showing the top 20 pathways involved in liver disease and function that were significantly elevated by the WD compared to ND.Values are expressed as -log (B-H p-value).(D) IPA showing the top 20 canonical pathways that were significantly altered in AN1284-treated mice on a WD compared to those treated with saline.Values are expressed as -log (B-H p-value).(E) GSEA showing the most significant, enriched pathways that were up-or downregulated by the WD and the change reversed by AN1284 treatment. 
4 AN1284 FIGURE 4 AN1284 increases RXR-a protein levels and decreases CD36 and FASN in mice on a WD.(A, B) WB of RXRa in mice on a WD.WD increases RXRa protein levels, which are further increased by AN1284 (1 mg/kg/day).The 5-mg dose was not tested.(C, D) WB of RXRa in mice after injection of CCl 4 .CCl 4 alone has no effect on the levels of RXR-a, which were markedly increased by AN1284 (1 mg/kg/day) given by mps or in the drinking water.(E, F) Immunohistochemical staining of CD36-positive cells.The cells, indicated by red staining, are a significantly greater proportion of the area in mice on WD than on ND and are markedly reduced by AN1284 (1 and 5 mg/kg/day).(G) mRNA levels of FASN in mice on WD.FASN mRNA levels are increased by the WD and reduced by AN1284 (5 mg/kg/day). 1 mg/kg/day (p = 0.05).Significantly different from ND, **p < 0.01, ***p < 0.001; significantly different from WD + saline, #p < 0.05, ##p < 0.01, ###p < 0.001. 6 AN1284 FIGURE 6 AN1284 decreases lipid generation from palmitic acid (PA), in human hepatoma cells in culture, through AhR activation.(A) Representative ORO staining in HUH7 human hepatoma cells incubated with PA/BSA and treated with increasing doses of AN1284 for 24 h.(B) Percent area of ORO in hepatoma cells after 2 h.(C) Nuclear translocation of AhR, 15 min after AN1284 addition.(D) AhR nuclear quantitation 15 min after AN1284 addition.(E) CYP1a1 mRNA levels after 24 h.(F) ACOX1 mRNA levels after 24 h.Significantly different from control **p < 0.01; significantly different from BSA + PA, #p < 0.05; ##p < 0.01; ###p < 0.001.These concentrations of AN1284 are within the range found in the liver after chronic treatment at 1 mg/kg/day by mps in this study. TABLE 1 A . Mouse primer sequences used for qPCR. TABLE 1 B . Human primer sequences used for qPCR.
6,949.6
2023-08-16T00:00:00.000
[ "Medicine", "Biology", "Environmental Science" ]
Vision-based Autonomous Landing of a Quadrotor on the Perturbed Deck of an Unmanned Surface Vehicle

Autonomous landing on the deck of an unmanned surface vehicle (USV) is still a major challenge for unmanned aerial vehicles (UAVs). In this paper, a fiducial marker is located on the platform so as to facilitate the task, since it makes it possible to retrieve its six-degree-of-freedom relative pose in an easy way. To compensate for interruptions in the marker's observations, an extended Kalman filter (EKF) estimates the current USV position with reference to the last known position. Validation experiments have been performed in a simulated environment under various marine conditions. The results confirmed that the EKF provides estimates accurate enough to direct the UAV into the proximity of the autonomous vessel, such that the marker becomes visible again. Using only the odometry and the inertial measurements for the estimation, this method is found to be applicable even under adverse weather conditions in the absence of a global positioning system.

Among the different UAV topologies, helicopter flight capabilities such as hovering or vertical take-off and landing (VTOL) represent a valuable advantage over fixed-wing aircraft. The ability to land autonomously is very important for unmanned aerial vehicles, and landing on the deck of an un-/manned ship is still an open research area. Landing a UAV on an unmanned surface vehicle (USV) is a complex multi-agent problem [11] and solutions to it can be used for numerous applications such as disaster monitoring [12], coastal surveillance [13,14] and wildlife monitoring [15,16]. In addition, a flying vehicle can also represent an additional sensor data source when planning a safe collision-free path for USVs [17].

Flying a UAV in the marine environment involves rough and unpredictable operating conditions, since wind and waves influence the manoeuvre far more than on land. Apart from the above, there are various other challenges associated with the operation of UAVs, for example, the inaccuracy of the low-cost GPS units mounted on most UAVs and the influence on magnetometers of the electrical noise generated by the motors and on-board computers. In addition to this, the estimation of the USV's movements is a difficult task due to natural disturbances (e.g., winds, sea currents, etc.). This makes it difficult for a UAV to land on a moving marine vehicle with low-quality pose information. To overcome these issues, the camera mounted on the UAV, commonly used during surveillance missions [18], can also be used to increase the accuracy of the relative-pose estimates between the aerial vehicle and the landing platform [19]. The adoption of fiducial markers on the vessel's deck is proposed as a solution to further improve the estimation results. To increase the robustness of the approach, a state estimation filter is adopted for predicting the 6 degrees-of-freedom (DOF) pose of the landing deck when it is not perceived by the UAV's cameras. This work can be considered as the natural consequence of [20], in which the developed algorithm was tested against a mobile ground robot, without any pitch and roll movements of the landing platform.
In terms of the paper organisation, Section 1 presents the method existing in literature about autonomous landing for UAVs, while Section 2 introduces the quad-copter model, the image processing library used for the deck identification, the UAV controller and the pose estimation filter.In Section 3 three experiments, each with a different kind of perturbation acting on the landing platform, are presented and discussed.Finally, conclusions and future works are shown in Section 4. State of the Art Autonomous landing is until now one of the most dangerous challenges for UAV.Inertial Navigation Systems (INS) and Global Navigation Satellite System (GNSS) are the traditional sensors of the navigation system.On the other hand, INS accumulates error while integrating position and velocity of the vehicle and the GNSS sometimes fails when satellites are occluded by buildings.At this stage, vision-based landing became attractive because it is passive and does not require any special equipment other than a camera (generally already mounted on the vehicle) and a processing unit. The problem of accurately landing using vision-based control has been well studied.For a detailed survey about autonomously landing, please refer to [21][22][23].Here, only a small amount of works are presented. In [24] and [25] an IR-LED helipad is adopted for robust tracking and landing, while a more traditional T-shaped and H-shaped helipad are used respectively in [26][27][28][29].The landing site is searched for an area whose pixels have a contrast value below a given threshold in [30].In [31] a Light Imaging, Detection, And Ranging (LIDAR) sensor is combined with a camera and the approach has been tested with a full-scale helicopter.Bio-inspired by the honeybees that use optic flow to guide landing, [32] follow the approach for fixed-wing UAV.The same has been done in [33,34] showing that by maintaining constant optic flow during the manoeuvre, the vehicle can be easily controlled. Hovering and landing control of a UAV on a large textured moving platform enabled by measuring optical flow is achieved in [35].In [36], a vision algorithm based on multiple view geometry detects a known target and computes the relative position and orientation.The controller is able to control only the x and y positions to hover on the platform.In a similar work [37], the authors were also able to regulate the UAV's orientation to a set point hover.In [38] an omnidirectional camera has been used to extend the field of view of the observations.Four light sources have been located on a ground robot and homography is used to perform autonomous take-off, tracking, and landing on a UGV [39].In order to land on a ground robot, [40] introduces a switching control approach based on optical flow to react when the landing pad is out of the UAV's camera field of view.In [41], the authors propose the use of an IR camera to track a ship from long distances using its shape, when the ship-deck and rotocraft are close in.Similarly, [42] address the problem of landing on a ship moving only on a 2D plane without its motion known in advance. 
The work presented in this paper belongs to the family of vision-based methods. Differently from most of them, given the platform used, it relies on a pair of low-resolution fixed RGB cameras, without requiring the vehicle to be equipped with other sensors. Furthermore, instead of estimating the current pose of the UAV, in order to land on a moving platform we employ an extended Kalman filter for predicting the current position of the vessel on whose deck the landing pad is located. The estimate is forwarded as input to our control algorithm, which updates the last observed USV pose and sends a new command to the UAV. In this way, even if the landing pad is not within the camera's field of view any more, the UAV can start a recovery manoeuvre that, differently from other works, takes the drone into the proximity of its final destination. In this way it can compensate for interruptions in the tracking due to changes in attitude of the USV's deck on which the pad is located.

Methods

In this section all the components used for accomplishing the autonomous landing on a USV are introduced. Initially, the aerial vehicle, together with its mathematical formulation, is described. Subsequently, the ar_pose computer vision library is presented. Finally, the controller and the pose estimation filter are discussed. A graphical representation of these components is depicted in Fig. 1 and a video showing the overall working principle is available online 1.

Quad-copter model

The quad-copter in this study is an affordable ($250 USD in 2017) AR Drone 2.0 built by the French company Parrot; it comprises multiple sensors such as two cameras, a processing unit, a gyroscope, accelerometers, a magnetometer, an altimeter and a pressure sensor. It is equipped with an external hull for indoor navigation and it is mainly piloted using smart-phones and tablets through the application released by the producer over a WiFi network. Despite the availability of an official software development kit (SDK), the Robot Operating System (ROS) [43] framework is used to communicate with it, using in particular the ardrone-autonomy package developed by the Autonomy Laboratory of Simon Fraser University, and the tum-ardrone package [44][45][46] developed within the TUM Computer Vision Group in Munich. These packages run within ROS Indigo on a GNU/Linux Ubuntu 14.04 LTS machine. The specifications of the UAV are as follows:
• Dimensions: 53 cm x 52 cm (hull included);
• Weight: 420 g;
• Inertial Measurement Unit (IMU) including gyroscope, accelerometer, magnetometer, altimeter and pressure sensor;
• Front camera with high-definition (HD) resolution (1280x720), a field of view (FOV) of 73.5° × 58.5° and video streamed at 30 frames per second (fps);
• Bottom camera with Quarter Video Graphics Array (QVGA) resolution (320x240), a FOV of 47.5° × 36.5° and video streamed at 60 fps;
• Central processing unit running an embedded version of the Linux operating system.

The downward-looking camera is mainly used to estimate the horizontal velocity, and the accuracy of the estimation highly depends on the ground texture and the quad-copter's altitude. Only one of the two video streams can be streamed at the same time. Sensor data are generated at 200 Hz. The on-board controller (closed-source) is used to act on the roll Φ and pitch Θ, the yaw Ψ and the altitude z of the platform. Control commands u = (Φ, Θ, Ψ, z) ∈ [-1, 1] are sent to the quad-copter at a frequency of 100 Hz.
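The control interface itself is not listed in the paper. As a rough illustration only, a node that sends the normalized command vector u at 100 Hz could look like the sketch below; the cmd_vel topic name and the Twist field mapping are assumptions based on the usual ardrone_autonomy conventions and are not taken from the authors' code.

```cpp
// Hypothetical ROS command sender for the AR Drone 2.0 (not the authors' code).
// Assumes the ardrone_autonomy convention: linear.x = pitch, linear.y = roll,
// linear.z = vertical speed, angular.z = yaw rate, all normalized to [-1, 1].
#include <ros/ros.h>
#include <geometry_msgs/Twist.h>

int main(int argc, char **argv) {
    ros::init(argc, argv, "uav_command_sender");
    ros::NodeHandle nh;
    ros::Publisher cmd_pub = nh.advertise<geometry_msgs::Twist>("cmd_vel", 1);

    ros::Rate rate(100.0);          // commands sent at 100 Hz, as in the paper
    while (ros::ok()) {
        geometry_msgs::Twist u;
        u.linear.x  = 0.1;          // pitch command (forward lean)
        u.linear.y  = 0.0;          // roll command (sideways lean)
        u.linear.z  = -0.05;        // vertical speed (slow descent)
        u.angular.z = 0.0;          // yaw rate
        cmd_pub.publish(u);
        ros::spinOnce();
        rate.sleep();
    }
    return 0;
}
```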
While defining the UAV dynamics model, the vehicle must be considered as a rigid body with 6-DOF, able to generate the necessary forces and moments for moving [47]. The equations of motion are expressed in the body-fixed reference frame B [48], where V = [u, v, w]^T and Ω = [p, q, r]^T represent, respectively, the linear and angular velocities of the UAV in B, F is the translational force combining gravity, thrust and other components, and J ∈ R^{3×3} is the inertia matrix, subject to F and to the torque vector Γ_b. The orientation of the UAV in the air is given by a rotation matrix R from B to the inertial reference frame I, where η = [φ, θ, ψ]^T is the Euler angles vector and s. and c. are abbreviations for sin(.) and cos(.). Given the transformation from the body frame B to the inertial frame I, the gravitational force and the translational dynamics in I are obtained accordingly, where g is the gravitational acceleration, F_b is the resulting force in B, and ξ = [x, y, z]^T and v = [ẋ, ẏ, ż]^T are the UAV's position and velocity in I.

Augmented Reality

The UAV's body frame follows the right-handed z-up convention, such that the positive x-axis is oriented along the UAV's forward direction of travel. Both camera frames are fixed with respect to the UAV's body frame, but translated and rotated in such a way that the positive z-axis points out of the camera lens, the x-axis points to the right from the image centre and the y-axis points down. The USV's frame also follows the same convention and is positioned at the centre of the landing platform.

Figure 2. Coordinate frames for the landing system. X_lv represents the UAV's pose with reference to the local frame and, in the same way, X_ls for the USV. X_c1v and X_c2v are the transformations between the down-looking and frontal cameras, respectively, and the vehicle's body frame. X_mv and X_ms are the poses from the visual marker to the UAV and to the USV, respectively. Finally, X_sv is the pose from the USV to the UAV.

Finally, a local frame has been defined, fixed with respect to the world and initialized by the system at an arbitrary location. In Fig. 2 the coordinate systems previously described are depicted. The pose of frame j with respect to frame i is now defined as the 6-DOF vector composed of the translation vector from frame i to frame j and the Euler angles φ, θ, ψ. The homogeneous coordinate transformation from frame j to frame i can then be written in terms of i_j R, the orthonormal rotation matrix that rotates frame j into frame i. Fig. 3 offers a graphical representation of the problem studied: retrieving the homogeneous matrix H makes it possible to calculate the UAV's pose with reference to the USV, expressed as translations and rotations along and around the three axes, respectively.
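The display equations of this subsection did not survive extraction. For reference, the standard relations they correspond to are reproduced below; this is a reconstruction under the stated z-up, ZYX Euler convention, so signs and ordering are assumptions and may differ from the authors' exact notation.

\[
m\,(\dot{V} + \Omega \times V) = F, \qquad J\,\dot{\Omega} + \Omega \times (J\,\Omega) = \Gamma_b,
\]
\[
R =
\begin{bmatrix}
c_\theta c_\psi & s_\phi s_\theta c_\psi - c_\phi s_\psi & c_\phi s_\theta c_\psi + s_\phi s_\psi\\
c_\theta s_\psi & s_\phi s_\theta s_\psi + c_\phi c_\psi & c_\phi s_\theta s_\psi - s_\phi c_\psi\\
-s_\theta & s_\phi c_\theta & c_\phi c_\theta
\end{bmatrix},
\qquad
\dot{\xi} = v, \quad m\,\dot{v} = R\,F_b - m\,g\,e_3,
\]
\[
{}^{i}X_j = [\,t_x,\ t_y,\ t_z,\ \phi,\ \theta,\ \psi\,]^T,
\qquad
{}^{i}_{j}H =
\begin{bmatrix}
{}^{i}_{j}R & {}^{i}t_{j}\\
0_{1\times 3} & 1
\end{bmatrix}.
\]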
In this work, augmented reality (AR) visual markers are adopted for identifying the landing platform. As described in [49], "in AR, virtual objects are super-imposed upon or composited with the real world. Therefore, AR supplements reality".

The ar_pose ROS package [50], a wrapper for the ARToolkit library widely used in human-computer interaction (HCI) [51,52], is used for achieving this task. The ar_pose markers are high-contrast 2D tags designed to be robust to low image resolution, occlusions, rotations and lighting variation. For this reason the library is considered suitable for a possible application in a marine scenario, where the landing platform can be subject to adverse conditions that can affect its direct observation. In order to use this library, the camera calibration file, the marker's dimension and the proper topic name must be defined inside a configuration file. The package subscribes to one of the two cameras. Pixels in the current frame are clustered based on similar gradients and candidate markers are identified. The Direct Linear Transform (DLT) algorithm [53] maps the tag's coordinate frame to the camera's one, and the candidate marker is searched for within a database containing pre-trained markers. The points in the marker's frame and in the camera's frame are denoted as M_P and C_P, respectively, so the transformation from one frame to the other is given by C_P = M_C H · M_P and M_P = C_M H · C_P, where M_C H and C_M H represent the transforms from the marker to the camera frame and vice versa, respectively.

Using the camera's calibration file and the actual size of the marker of interest, the 6-DOF relative pose of the marker's frame with respect to the UAV camera is estimated at a frequency of 1 Hz. For the current and the last marker observation, the time stamp and the transformation are recorded. This information is then used to detect whether the marker has been lost and to actuate a compensatory behaviour.

Controller

In order to control the drone in a less complex way, the PID controller offered by the tum_ardrone package has been replaced with a (critically) damped spring one. In the original work of [46], for each of the four degrees of freedom (roll Φ, pitch Θ, yaw Ψ and altitude z), a separate PID controller is employed. Each of them is used to steer the quad-copter toward a desired goal position p̂ = (x̂, ŷ, ẑ, ψ̂) ∈ R^4 in a global coordinate system. The generated controls are then transformed into a robot-centric coordinate frame and sent to the UAV at 100 Hz.

In this paper, in order to simplify the process of tuning the controller's parameters, a damped spring controller has been adopted. In the implementation, only two parameters, K_direct and K_rp, were used to modify the spring strength of the directly controlled dimensions (yaw and z) and of the leaning ones (x and y). An additional one, xy_damping_factor, is responsible for approximating a damped spring and for accounting for external disturbances such as air resistance and wind. The controller inputs are variations in the angles of roll, pitch, yaw, and altitude, respectively denoted as u_Φ, u_Θ, u_Ψ and u_z, each generated by the damped spring law with damping coefficients c_rp and c_direct calculated from the corresponding spring strengths. Therefore, instead of controlling nine independent parameters (three for the yaw, three for the vertical speed and three for roll and pitch paired together), the control problem is reduced to the three described above (namely K_direct, K_rp and xy_damping_factor).
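The exact control law and the expressions for c_rp and c_direct are not reproduced above. The sketch below illustrates the general idea of a critically damped spring controller for a single axis; the relation c = 2·sqrt(K), the role given to the damping scale and the clamping to [-1, 1] are assumptions made for illustration and are not taken from the tum_ardrone implementation.

```cpp
// Sketch of a (critically) damped spring controller for one axis; hypothetical.
// K plays the role of K_direct / K_rp, and "damping" that of xy_damping_factor.
#include <algorithm>
#include <cmath>

struct DampedSpringAxis {
    double K;        // spring strength for this axis
    double damping;  // damping scale (akin to xy_damping_factor)

    double command(double error, double error_rate) const {
        double c = 2.0 * std::sqrt(K) * damping;   // critical damping assumption
        double u = K * error - c * error_rate;     // spring term minus damper term
        return std::clamp(u, -1.0, 1.0);           // normalized command range
    }
};

// Example use: a roll command from the lateral offset to the marker.
// DampedSpringAxis roll{0.3 /* K_rp */, 0.65 /* xy_damping_factor */};
// double u_roll = roll.command(lateral_error_m, lateral_velocity_mps);
```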
The remaining controller parameters are platform-dependent variables and they are kept constant during all the trials. Ignoring droneMass, which requires no description beyond its name, max_yaw, max_gaz_rise and max_gaz_drop limit the rotation speed around the yaw axis and the linear speed along the z-axis, respectively. Finally, max_rp limits the maximum leaning command sent.

The controller's parameters are the same across all the experiments performed and they are shown in Table 1. The K_rp parameter, responsible for controlling the roll and pitch behaviour, is kept small in order to guarantee smooth movements along the leaning dimensions. In the same way, max_gaz_drop has been reduced to a value of 0.1 to decrease the descending velocity. On the other hand, the max_yaw parameter, used to control the yaw speed, has been set to its maximum value because the drone must align with the base in the minimum amount of time possible. The others have been left at their default values.

Pose estimation

To increase the robustness and efficiency of the approach, an extended Kalman filter (EKF) has been adopted here for estimating the pose of the landing platform [54]. In fact, the UAV may lose track of the fiducial marker while approaching and descending on it. In order to redirect the flying vehicle in the right direction, the EKF estimates the current USV pose, which is then processed and forwarded to the controller. For estimation purposes, the odometry and inertial data are fused together to increase the accuracy [55,56]. The state vector is defined as x = [x, y, z, φ, θ, ψ, ẋ, ẏ, ż, φ̇, θ̇, ψ̇], with x, y, z and ẋ, ẏ, ż representing respectively the global position and velocity, and φ, θ, ψ and φ̇, θ̇, ψ̇ the attitude and angular rates of the vessel. Considering the sensor readings, the estimation process satisfies the standard prediction and update equations

x̂_k|k-1 = F_k x̂_k-1|k-1,    P_k|k-1 = F_k P_k-1|k-1 F_k^T + Q_k,
K_k = P_k|k-1 H_k^T (H_k P_k|k-1 H_k^T + R_k)^-1,
x̂_k|k = x̂_k|k-1 + K_k (z_k - H_k x̂_k|k-1),    P_k|k = (I - K_k H_k) P_k|k-1,

where k represents a discrete time instant, F_k is a kinematic constant velocity model, H_k is the observation model, z_k is the measurements vector, I is an identity matrix, Q_k is the process covariance matrix and R_k is the measurement covariance matrix.

The working principle of the EKF in this case is detailed below:
• the filter estimates the USV's pose at 50 Hz and its encoding is saved in a hash table using the time stamp as key;
• when the UAV loses the track, the hash table is accessed and the last record inserted (the most recent estimate produced by the filter), together with the one having as key the time stamp of the last recorded observation, are retrieved;
• the deck's current position with reference to the old one is calculated using geometric relationships;
• the controller commands are updated to include the new relative position.
The procedure described above is iterated until the UAV is redirected above the visual marker and can perceive it through its bottom camera.

Methodology

Algorithm 1. Landing algorithm.

The following section explains how Algorithm 1 works. The code is publicly available on our repository 2. The quad-copter flies using its fixed non-tilting frontal camera, approaching the landing site on the USV's deck, identified only by a fiducial marker. The marker, whose purpose is to outline the landing area, has to be perceived during the whole landing manoeuvre. This is a requirement for precise landing, even though the state estimator can compensate for interruptions in the observations. When a visual marker is detected, the image processing library computes the 6-DOF relative pose between the marker itself and the UAV.
The result is used to make the quad-copter approaching the marker with the right orientation.To obtain this result, a damped spring controller reduces the error on the x−, y− and z−axis and on the quad-copter's yaw.On attaining close proximity to the marker, the marker leaves the field of view of the frontal camera.This is due to hardware limitation of fixed non-tilting cameras.To overcome this problem, the video stream from the frontal camera is interrupted and acquired from the one located under the UAV and downward-looking.The quad-copter continues the landing manoeuvre keeping the marker at the centre of the second camera's FOV.Otherwise, a compensatory behaviour is adopted: the EKF estimates the actual position of the USV and the drone is redirected close to it while increasing its altitude.Increasing the altitude allows to enlarge the field of view of the bottom camera, that is quite limited.In this way, it is guaranteed that the marker will be soon perceived and centred by the aerial vehicle.When an experimentally defined distance from the marker is reached, the drone lands safely.This distance depends on the side length of the marker used.In fact, with a smaller marker it would be possible to decrease this value but it would become impossible to perceive the marker at longer distance.We found that a marker side length of approximately 0.30 meters represents a good trade-off for making the marker visible at long and close distance at the same time.As a consequence, we decide to use 0.75 meters as distance for starting the touchdown phase of the descending manoeuvre, during which the power of the motors is progressively reduced until complete shut-down.The use of visual markers allows the estimation of the full 6-DOF pose information of the aerial and surface vehicles.In this way, landing operations in rough sea condition with a significant pitching and rolling deck can still be addressed. Results and Discussion All the experiments has been conducted inside a simulated environment built on Gazebo 2.2.X and offering a 3D model of the AR Drone 2.0.To the scope of this work, the existing simulator has been partially rewritten and extended to support multiple different robots at the same time.The Kingfisher USV, produced by Clearpath Robotics, has been used as floating base.It is a small catamaran with a dimension of are 135 x 98 cm, that can be deployed in a autonomous or tele-operated way.It is equipped with a flat plane representing a versatile deck for UAVs of small dimension.On this surface a square visual marker is placed.Previous research demonstrated a linear relationship is existing between the side length of the marker and its observability.Therefore, we opted for a side length of 0.3 meters that represents a good compromise, making the marker visible in the range [0.5, 6.5] meters. The algorithm has been tested under multiple conditions, namely three.In the first scenario, the USV is subjected only to a rolling movement while floating in the same position for all the length of the experiment; in the second scenario, the USV is subjected only to a pitching movement; while in the last scenario the USV is subject to both rolling and pitching disturbances at the same time.Fig. 
4 illustrates the rotation angles around their corresponding axis.In all the simulations, the disturbances are modelled as a signal having a maximum amplitude of 5 degrees and a frequency of 0.2 Hz.Rolling and pitching of a vessel generate upward and downward acceleration forces directed tangentially to the direction of rotation, which cause linear motion knowns as swaying and surging along the transverse or longitudinal axis respectively [57]. Rolling Platform In this subsection, the results of a landing manoeuvre on a rolling floating base are reported.In particular, Fig. 5 illustrates the UAV and the USV's trajectory, respectively in blue and red, in the UAV's reference frame; while Fig. 6 and Fig. 7 show the controller commands and the salient moments of the manoeuvre respectively. The marker has been successfully recognised at a distance of 3.74 meters in front of the UAV, and at 0.09 meter on its left.The displacement on the z−axis, used as reference for the altitude, was of 0.84 meter instead.The UAV, with the parameters reported in the previous table 1, has been able to complete the landing in 25 seconds. The quad-copter approaches the landing base trying to keep it at the centre (in a range of ± 10 degrees) of its camera's FOV.In the case the marker leaves this interval of tolerance, the UAV would rotate around its z−axis in order to centre it again.The approach continues until the UAV's low altitude prevents the marker to be seen from the frontal camera, as shown in Fig. 7-a (t = 10s).At this point, the video stream is switched from the frontal camera to the one located at the bottom of the quad-copter and looking down, and new commands are generated and sent.The UAV is instructed to move towards the last known position of the landing platform but increasing its altitude in order to enlarge the area covered by its bottom camera.At t = 15s, as represented in Fig. 7-b, the UAV is located exactly above the marker and it can now complete the landing phase: it descends while trying to keep the marker at the centre of its FOV, as shown in Fig. 7-c.Small velocity commands are sent on the leaning direction (x and y, respectively) in order to approach the final position with high accuracy. Finally, at t = 25s the UAV reaches the minimum altitude required to shut-down its motors and land on the platform (Fig. 7-f). The commands generated from the relative-pose between the UAV and the landing platform's frame are illustrated in Fig. 6.Here, the controller's commands are plotted against the perception from the camera.As it is possible to see in the figure, for most of the travel the two curves of the commands and of the observations overlap perfectly.When they do not, the marker is lost and the UAV actuates the compensatory behaviour: the estimation filter's output, namely the USV's predicted pose, is combined with the latest vision observation in order to generate new commands for the UAV. In this way it is possible to explain changing in roll, pitch and altitude in the graph.Since the UAV has the same yaw of the floating base, namely they have the same orientation along the z-axis, no rotation commands are issued for this degree of freedom. 
Few words are reserved for the pitch's data between t = 18s and t = 22s, and the gaz's ones between t = 5s and t = 8s.In this case, the offsets are below a user-defined threshold and a null command is sent instead.The use of a threshold has been introduced for speeding up the landing phase: while testing the controller, it was noticed the UAV spent a lot of time while trying to align perfectly on the three axis with the centre of the landing plane, sometimes without any success.This has been identified as a limitation of controllers with fixed values parameters and a new more versatile solution is already planned as future work. Pitching Platform In this subsection an experiment with a pitching floating platform is reported.As before, the time for completing the landing manoeuvre is not considered as key-factor but the attention is on the ability of the UAV to approach and land on the USV with high precision.As in the previous experiments, the two vehicles 3D trajectory are reported in Fig. 8 in the UAV's reference frame, the controller commands in Fig. 9 and example frames in Fig. 10.The quad-copter, with the same controller parameters of before, was able to follow and land on the visual marker in almost 34 seconds after identifying it 4.46 meters ahead and 0.12 meter on its left. As in the case of a rolling base, Fig. 10-a shows the UAV starts moving in order to keep the visual marker at the centre of its frontal camera's field of view.This is what happens at time t = 26s and shown in Fig. 10-b.At t = 6s the UAV reaches its minimum altitude and it is now impossible for it to see the visual marker, as illustrated in Fig. 10-c.At this point, the video stream starts to be acquired from the bottom camera and the USV's estimated position is sent to the controller.At the same time, instructing the UAV to increase its altitude to augment the total area covered with its downward-looking camera.Doing this, at t = 13s the UAV is located exactly above the USV.The landing base is at centre of the camera's FOV, therefore a null velocity command is sent to stop the USV.Fig. 10-e and 10-f show the UAV can then descend slowly to centre the marker properly and, in the end, land on it. Further analysis can be done with the results reported in Fig. 9.In the same way of the experiment with a rolling deck, the curve of the controller's commands and the one related to the offsets overlap for most of the time.All the considerations made before still hold: while the marker is lost, the EKF is able to estimate the landing platform's current pose with reference to the instant of time when the marker has been lost.This relative-pose is added to the last observation in order to produce a new command. This is what is possible to see in the plot between t = 21s and t = 25s.Here, the two curves differ: while all the offsets remain constant because no new marker observations have been done by the UAV, the commands (gaz and roll) slightly change.The plot is now discussed in more details.While the yaw and the pitch commands remain identical to 0 because the UAV is already aligned with the landing base (within the predefined bounds), the UAV's roll command is changed including at every instant the new relative-pose (changing on the longitudinal direction) of the USV. . Landing manoeuvre of a VTOL UAV on a USV subject to both rolling and pitching disturbances, in order to simulate complex marine scenarios. 
Rolling and Pitching Platform

A last simulation has been carried out with a floating platform that is subject to both rolling and pitching stresses. The goal of this experiment is to test the developed landing algorithm against simulated harsh marine conditions. The results are reported in Fig. 11, showing both vehicles' trajectories along a 23-second operation. The UAV successfully accomplished the landing manoeuvre, starting from an initial marker identification 3.71 meters in front of it and 0.30 meters on its left. Fig. 12 shows the comparison between the offsets obtained through the vision algorithm and the commands sent to the controller. It is possible to see that, as in the previous experiments, the curve of the offsets and the one related to the commands mainly overlap. All the analyses made before are still valid, but it is interesting to notice how the proposed framework is able to react properly also when the landing platform is subject to complex disturbances. The salient moments of the flight are illustrated in Fig. 13.

Conclusion and Future Directions

In this paper, a solution to make an unmanned aerial vehicle land autonomously on the deck of a USV is presented. It relies only on the UAV's on-board sensors and on the adoption of a visual marker on the landing platform. In this way, the UAV can estimate the 6-DOF landing area position through an image processing algorithm. The adoption of a pose estimation filter - in this case an extended Kalman filter - allows issues with the fixed non-tilting cameras and the image processing algorithm to be overcome. Not involving GPS signals in the pose estimation and in the generation of flight commands allows the UAV to land also in situations where this signal is not available (indoor scenarios or adverse weather conditions).

Figure 1. Different components are integrated for achieving autonomous landing on the deck of an unmanned surface vehicle.
Figure 3. The image processing algorithm estimates the distances between the UAV and the visual marker.
Figure 4. The movements around the vertical, longitudinal and lateral axis of the USV are called yaw, roll and pitch respectively.
Figure 5. Above: the UAV and USV 3D trajectories, in blue and red respectively, in the UAV's reference frame. Bottom: the roll disturbances the USV is subject to.
Figure 6. Controller commands and visual offsets in the experiment with a rolling landing platform.
Figure 7. Landing manoeuvre of a VTOL UAV on a USV subject only to rolling disturbances.
Figure 9. Controller commands and visual offsets in the experiment with a pitching landing platform.
Figure 10. Landing manoeuvre of a VTOL UAV on a USV subject only to pitching disturbances.
Figure 12. Controller commands and visual offsets in the experiment with a pitching and rolling landing platform, in order to simulate complex marine scenarios.
Table 1. The controller parameters used in the simulations performed.
7,530.6
2018-04-14T00:00:00.000
[ "Engineering", "Computer Science" ]
A Compact Model to Evaluate the Effects of High Level C++ Code Hardening in Radiation Environments

A high-level C++ hardening library is designed for the protection of critical software against the harmful effects of radiation environments that can damage systems. A mathematical and empirical model to predict system behavior in the presence of radiation induced faults is also presented. This model enables a quick evaluation and adjustment of several reliability vs. performance trade-offs, to optimize radiation hardening based on the proposed C++ hardening library. Several simulations and irradiation campaigns with protons and neutrons are used to build the model and to tune it. Finally, the effects of our hardening approach are compared with other hardened and non-hardened approaches.

Introduction

Progressive technological down-scaling is reducing the natural resilience of circuits, implying greater susceptibility to radiation faults [1]. In the past, fault-tolerant microprocessors were required for systems working in harsh environments, such as satellites, aircraft, autonomous vehicles, or any kind of autonomous decision-making systems, but today they are increasingly in demand even at ground level [2], where radiation induced soft errors can frequently occur. Soft-error radiation faults are produced by the effect of incident particles on circuits where, as a consequence, the digital state of a node can be modified (bit-flipping). The developers of critical systems are constantly searching for ways to improve and/or maximize the reliability of critical applications in the presence of soft errors, which can lead to catastrophic failure situations.

Many approaches are described in the literature to minimize the effect of soft errors. Conventional approaches improve reliability by introducing redundancy in different hardware, software or hardware-software structures, in order to mask the wrong results by majority voting [3] or other redundancy-exploiting methods. For instance, it is common for hardware approaches to apply triple modular redundancy (TMR), to achieve reliability by replicating some physical components (rad-hard processors) [4]. Software-implemented hardware fault tolerance (SIHFT) techniques also introduce redundancy at instruction level by replicating several blocks of code [5] or several critical instructions [6]. Hardware-software hardening techniques, which reduce some weaknesses of the hardware-only or software-only techniques, are also possible [7][8][9]. Other recent approaches represent attempts to gain reliability improvements by introducing no modifications in either the application (code instrumentation) or the system (specific components). These techniques seek to achieve improvements during the transformation from high level code (source code) to machine code (executable) by altering the code compilation method [10]. Each approach has its own advantages and disadvantages; for example, applying software hardening techniques has the disadvantage of producing unwanted overheads in processing time and storage needs. When comparing the different approaches from the user perspective, there are three broad groups: those that require high levels of user intervention (most TMR-based approaches), those that require very little intervention (such as the approach proposed in this paper), and those that require no user intervention and delegate hardening to some form of Artificial Intelligence (such as MOOGA [10]).
The first approaches require lot of human effort, for instance, to change the focus of hardening. The third set requires a lot of CPU time to compare a large number of software versions, while the proposed approach can quickly explore several alternatives simply by changing the type and definition of each variable of interest, in a very fast operation. In this article we focus on the SIHFT techniques, because they can be implemented in commercial off-the-shelf (COTS) microprocessors, thereby avoiding any internal modification to the microprocessor. More precisely, we are interested in high-level instrumentation techniques capable of deriving the inherent trade-offs, while maintaining flexibility and usability. In view of the above, the key issues considered during the development of the new SIHFT approach presented in this article are: 1. The approach should be applicable to protect the largest possible amount of software, particularly the intellectual properties (IPs) commonly available on the Internet. 2. Post-compilation interventions must be as limited as possible (possibly none), in order to make software update and optimization fast and reliable. 3. The approach should apply to any COTS processor. It should not rely on any intrinsic radiation hardness of the processor except, obviously, the capability to withstand the total ionizing dose (TID). The chosen language, C/C++, is compatible with commonly used software development techniques, leaving aside the domain of modern iconic programming. The idea which addressed and solved all the above issues is based on developing a set of C++ classes aimed at protecting program variables and processor registers, mostly by means of TMR. In the following sections, a new high-level SIHFT technique will be presented, together with a reliability estimation model, to evaluate the impact of the system configuration parameters on program execution and radiation sensitivity. The model was developed from the results of two accelerated radiation campaigns conducted at the National Centre for Accelerators (CNA)-Spain, and Los Alamos Neutron Science Center (LANSCE)-USA. Automatic Hardening Approach Based on C++ Classes We propose a method that is intended for the protection of software code on COTS processors. In particular, it addresses the following elements of a COTS microcontroller system: 1. Numeric data stored in temporary and long-term storage locations. As it is a C/C++ level approach, there is no explicit distinction between registers and memory, although it provides overall data protection to data stored in the C/C++ variables, regardless of how and where these are allocated by the compiler; 2. 3. Program memory, mostly for situations where the program is stored in volatile or radiation-sensitive memory; Under certain circumstances induced by radiation, the microcontroller program may occasionally restart, which is considered acceptable, provided that the results produced at the end of execution are correct. In particular, some aspects of protection rely on inducing an automatic program reset when a SEFI is detected. It is worth noting that most benefits of the proposed approach may also apply to temporary faults induced by other causes, such as electromagnetic interferences, allowing technological transfer to other ground-based activities, such as functional safety in automotive electric/electronic systems, and detect and correct errors in high performance computing (HPC), among others. 
Using C++ Classes for Data Protection

The proposed methodology is based on a C++ template class called TD<DataType> (standing for "triple data") which can be applied to any numeric DataType (e.g., TD<char>, TD<int>, TD<float>). A TD<DataType> class transparently protects, by means of TMR, a numeric variable of any given DataType. This class has been designed to allow total reuse of existing code, only changing the definition of the variables to be protected, while leaving the rest of the code unchanged. The internal architecture of a TD<DataType> class (see Table 1) contains three private variables (i.e., concealed from the user) of the type DataType, storing as many replicas of the same data (d1, d2, d3). A seamless use of the proposed class requires: (i) the appropriate re-definition of all possible numeric, comparison and logical operators; (ii) writing the code to implement each of them in a redundant way. For instance, for the assignment operator (=), the kernel of the code and its usage are shown in Table 2. This implies that, despite the apparently identical usage of the assignment (a = b) with respect to standard C variables, the use of operator = of class TD<DataType> implies (in a transparent way) that the value of each replica of b is assigned to the corresponding replica of a. A similar approach applies to all algebraic operators (e.g., +, -, *, /), comparison operators, logical operators, etc. In our library, the casting operators to/from TMR data and plain data have been overloaded for transparent conversion between data types. Conversion from TMR to plain data implicitly implements majority voting, while conversion from plain to TMR implicitly implements triplication.

The following simple example compares a simple piece of C code which sums up two variables and stores the result in a third variable (see Table 3). The same code is written in fully unprotected, fully protected and partially protected ways, together with a possible manual protection.

Table 3. Comparison of how to sum up two integer variables using different protection levels: original code, protection using our technique and manual TMR protection.

Unprotected:
    int a, b;
    int c;
    c = a + b;

Fully protected (our technique):
    TD<int> a, b;  // protected
    TD<int> c;     // protected
    c = a + b;

Partially protected (our technique):
    TD<int> a, b;  // protected
    int c;         // unprotected
    c = a + b;

Manual TMR:
    int a1, a2, a3, b1, b2, b3;
    int c1, c2, c3;
    c1 = a1 + b1; c2 = a2 + b2; c3 = a3 + b3;

As a consequence, by writing c = a + b; the compiler automatically generates the code that will sum up and store each corresponding replica of the DataType in a completely transparent way and will preserve (by construction) the correspondence of each replica. This is quite unlike the manual TMR approach, which would be quite prone to coding errors. The TD<DataType> class is designed to support any operator and constructor (e.g., vectors and structures) commonly used inside C programs. The potential risks of pointers are normally to be avoided, and pointers can be used with greater safety by adopting, for instance, a TMR-protected TD<int*>, as redundancy significantly reduces the risk of pointer corruption. In the case of single-event upsets (SEUs) (or any other transient fault) affecting one of the three replicas, the original value can be recovered by majority voting, again in a transparent way. Consider, for instance, the simple piece of code of Table 4.

Table 4. Example of triplicating and voting automation for two hardened and non-hardened variables.

TD<int> a;
int b;
b = a;        // majority vote of a's replicas into b
a = b;        // triplicate b into a's replicas
a = (int) a;  // compact form: vote + TMR

This code first converts and copies a redundant variable, a, into a non-redundant variable, b, by enforcing majority voting, and it then stores the three replicas of the voted value, b, back into variable a. In other words, it re-synchronizes the replicas by majority voting. The last line is a compact form which does exactly the same thing. Any existing program can therefore be hardened by a mere redefinition of the variables used, while the active part of the code requires no modification at all. This idea per se is not novel, as TMR is widely used to achieve data protection, but the way it has been implemented and optimized with respect to radiation tolerance is new and easy to use.
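As an illustration of the mechanism just described, a minimal sketch of how such a template could be written is given below. This is not the authors' library, which overloads the complete set of numeric, comparison and logical operators and supports constructors, vectors and structures; the sketch only shows triplication, replica-wise operation and the majority-voting conversion.

```cpp
// Minimal, illustrative TMR template in the spirit of TD<DataType>.
template <typename T>
class TD {
    T d1, d2, d3;                              // three private replicas
public:
    TD() : d1(), d2(), d3() {}
    TD(const T &v) : d1(v), d2(v), d3(v) {}    // plain -> TMR: triplication

    TD &operator=(const TD &b) {               // replica-wise assignment
        d1 = b.d1; d2 = b.d2; d3 = b.d3;
        return *this;
    }

    TD operator+(const TD &b) const {          // replica-wise sum
        TD r;
        r.d1 = d1 + b.d1; r.d2 = d2 + b.d2; r.d3 = d3 + b.d3;
        return r;
    }

    operator T() const {                       // TMR -> plain: majority voting
        if (d1 == d2 || d1 == d3) return d1;
        return d2;                             // d2 == d3 (or unrecoverable)
    }
};

// Usage mirroring Table 4:
//   TD<int> a;  int b;
//   b = a;            // majority vote of a's replicas into b
//   a = b;            // triplication of b into a's replicas
//   a = (int) a;      // compact re-synchronization: vote + TMR
```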
Protecting Other Elements of a Program

A complex program does not only rely on data memory, which can be protected by means of the TD<...> class. Other elements have also been considered. Configuration registers cannot be triplicated in the same way as normal memory locations, as they are unique in hardware and their TMR would require redesigning the manufacturing masks. Protecting the configuration registers is therefore supported by another type of class, called TDreg<...>, which automatically stores two other copies of the register in data memory and periodically re-synchronizes the hardware register by majority voting with the two stored replicas. The periodic refresh can be implemented in different ways, depending on system and mission requirements, for instance: (i) by a timer-driven interrupt routine which refreshes all variables, triggered, for instance, every minute; (ii) at the beginning or at the end of each program loop (if any); (iii) by voting whenever a critical variable is used. Since the configuration registers of commercial processors mix read-only and write-only bits, the definition of TDreg<...> supports this feature and synchronization is automatically limited to writable bits.

Program memory should nominally be read-only, as it only contains machine code and numeric constants. We explicitly omit consideration of self-modifying code, as it is considered too dangerous for critical applications. As a consequence of a nominally constant program code, its protection is limited to computing a "signature" of the code area, on a periodic basis, and verifying it against a golden sample. As soon as a SEU affects the program area, its signature will no longer match and the program will automatically be reset, downloading the program again from a more rad-tolerant ROM. This is implemented by means of the TDcode class.

Internal control registers (namely internal state machines, the program counter and the stack pointer) are more difficult to protect and are the most common cause of program hangs, therefore causing SEFIs. Our approach offers periodic verification of stack-pointer consistency, but the other control registers (e.g., program counter and status register) can be protected only to a very limited extent. The only means available to compensate for SEFIs is the use of a watchdog timer (or equivalent methods), already commonly used in these situations. Yet Section 4.1 shows that protection of the program counter and stack pointer will not usually improve hardness significantly. Interrupt handlers are normal routines that can be protected with the same techniques described above.
In addition, interrupts also rely on interrupt enable bits, which are part of configuration registers; these can be protected by means of the TDreg<...> class described above. Performance Issues The use of TMR, on the one hand, significantly increases the hardness of a program to single-event effects (SEEs) but, on the other hand, it also impacts on aspects of performance, particularly speed and memory size. In theory, execution time should increase by a factor of three at most (the same as redundancy), although the increased flexibility and safety made available by the use of the C++ classes causes an additional overhead by another factor of two, on average, mostly due to the periodic necessity of majority voting. This overhead has been strongly optimized by means of the many features of state-of-the-art optimizing compilers, although it cannot be completely removed for several reasons. As a consequence, program execution, for a program with variables that are totally triplicated will take six times more time to execute, on average. An appropriate selection of which variables to protect and which ones need no protection significantly reduces the impact on program speed. Section 4.1 gives some hints on both how to select storage blocks and which specific variables to protect and which ones need not be protected, allowing a quick performance trade-off customized for specific mission requirements. A Compact Reliability Estimation Model During the process of hardening a piece of code (or even a complete HW/SW design), it is of the utmost importance to analyze a number of different configurations and to evaluate the impact of configuration parameters (e.g., data triplication, register refresh, error checking, register optimization, inlining, interrupts, etc.) on program execution and radiation sensitivity. We developed a mathematical and empirical model, for quick evaluation of several reliability vs. performance trade-offs and for the optimization of radiation hardening without excessively compromising performance. The model offers valuable advance information on system behaviour in the presence of radiation induced faults. Firstly, it predicts the occurrence frequency of faults that affect program execution for any combination of processor, high level language, compilation parameters, hardening techniques and selection of protected variables. Secondly, it estimates the impact of each storage area, variable or data structure on the overall reliability, to concentrate hardening efforts on the block that has the highest impact on radiation sensitivity. It is worth highlighting other works that either compare different methods with real radiation data (making the approach quite expensive and time consuming, and therefore ruling out the possibility of comparing large numbers of alternatives) and with simulated campaigns (cheaper and faster approach, but of lower reliability). The proposed approach is, instead, based on a compact parametric mathematical model, the parameters of which are first evaluated once and for all on real radiation measurements, then the model is applied in an iterative way, thereby permitting a wider search in the space of hardening alternatives. Model Preliminaries The model is based on cycle-accurate simulations using the OVPsim simulator [11] while randomly corrupting: registers (R), data memories (D), and program memory (P). 
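As a simplified illustration of the single-bit-flip model behind these simulations, a generic SEU injector could look like the sketch below. This is purely for exposition: it does not use, and is not part of, the OVPsim API, in which the injection into registers, data memories and program memory is performed by the simulator itself.

```cpp
// Generic single-bit-flip (SEU) injector for any trivially copyable object.
#include <cstddef>
#include <cstring>
#include <random>
#include <type_traits>

template <typename T>
void inject_seu(T &target, std::mt19937 &rng) {
    static_assert(std::is_trivially_copyable<T>::value,
                  "bit-level access requires a trivially copyable type");
    std::uniform_int_distribution<std::size_t> pick(0, sizeof(T) * 8 - 1);
    const std::size_t bit = pick(rng);

    unsigned char bytes[sizeof(T)];
    std::memcpy(bytes, &target, sizeof(T));
    bytes[bit / 8] ^= static_cast<unsigned char>(1u << (bit % 8));  // flip one bit
    std::memcpy(&target, bytes, sizeof(T));
}
```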
Each storage block may have a different hardware implementation, so it may therefore have its own cross section per byte α_R (respectively, α_D and α_P); in addition, data storage may be distributed between a number of memories, each one having a different cross section α_D1, α_D2, α_D... (e.g., FLASH, ferroelectric, static and dynamic RAMs). We assume that the cross section is different for each type of storage, and we relate each one to the basic cross section of the main processor RAM (D1), that is α_D1 ≡ α. We therefore state that

  α_X = K_X · α,

where the K_X are appropriate coefficients and, by definition, K_D1 = 1. In particular, K_P is the coefficient of either ROM or RAM, depending on where the program is executed.

For each given processor, algorithm, language, compilation flags, hardening effort, etc., OVPsim simulations are set up to induce one random SEU per run of the compiled program, in any of the aforementioned storage blocks (R, D, P). The fault injection can be performed in each memory block at different abstraction levels. This means, for example, that we can induce an error in an SRAM or a DRAM device at any possible address of its addressing space, or only induce faults in individual C/C++ variables of interest (vectors, matrices, ...).

Injected faults are classified according to their effect on program behavior, in a similar way to the first proposals of Mukherjee et al. [12]. Faults which neither hang program execution nor affect the expected program output are called unnecessary for architecturally correct execution (unACE). On the other hand, faults which visibly affect program execution are called architecturally correct execution (ACE) faults, which comprise the two categories specifically considered in this paper: (i) faults which allow the program to terminate normally, but produce corrupted results, called silent data corruption (SDC); and (ii) faults which cause abnormal program termination or infinite execution loops, called HANG. Each simulation set was configured to inject 1000 faults per register in the register file and 18,000 faults in the memory section allocated by the benchmark. This arrangement implies a total of at least 72,000 faults injected per program version, achieving a statistical error of ±1% at a 99% confidence level, according to the statistical model proposed by Leveugle et al. [13].

Model Description

Simulations provide, as an output, the number of SDCs (respectively SD_R, SD_D1, SD_D2, SD_D..., SD_P) and HANGs (respectively HG_R, HG_D1, HG_D2, HG_D..., HG_P) out of R_R program executions (respectively, R_D1, R_D2, R_D..., R_P), the size of each storage area being S_R words (respectively, S_D1, S_D2, S_D..., S_P). For every configuration we define, for each storage area (where Z is either R, D1, D2, P, ...), the equivalent block size for SDCs of that area, expressed in bytes, as

  β_Z = K_Z · (SD_Z / R_Z) · S_Z.

We also define the equivalent block size for HANGs of that area, expressed in bytes, as

  γ_Z = K_Z · (HG_Z / R_Z) · S_Z.

The two formulas provide two factors (β_Z, γ_Z) which are proportional to: the increased radiation sensitivity of the specific memory area (K_Z); the failure probability as estimated by the simulations (SD_Z/R_Z, HG_Z/R_Z); and the size S_Z of the memory block, which is proportional to the probability of a particle hitting that block.
From these factors, we can find the total equivalent sizes for SDCs and for HANGs of the whole program, respectively:

  β_TOT = β_X + Σ_Z β_Z,    γ_TOT = γ_X + Σ_Z γ_Z,

where the two additional parameters, β_X and γ_X, are the equivalent block sizes for SDCs and HANGs of the internal control unit and the state machines, which cannot be simulated by the OVPsim simulator and are therefore empirically estimated.

System Reliability

Given β_TOT and γ_TOT, our model predicts the probability of SDCs and HANGs per execution:

  P_SDC = Φ · α · (T_E · β_TOT),    P_HANG = Φ · α · (T_E · γ_TOT),

where Φ is the radiation flux (particles/s/cm²), α is the cross section per byte (cm²/byte) of storage, and T_E is the nominal program execution time (s). The two expressions between brackets are called the size-time figures for SDCs (χ_SDC = T_E · β_TOT) and HANGs (χ_HANG = T_E · γ_TOT), respectively, of the given configuration. An innovative aspect of the proposed approach is that, thanks to the size-time figures χ_SDC and χ_HANG, the impact of each data storage, each data structure, and even each individual variable on the overall radiation performance can be easily assessed, and the hardening efforts may therefore be concentrated where the effect is highest, reducing the impact of hardening to a minimum.

Depending on the application, we can estimate, firstly, the mean work to failure (MWTF) (i.e., the average number of program executions between two failures) in the following way:

  MWTF = 1 / (Φ · α · χ),    with χ = χ_SDC or χ_HANG,

which depends on: (i) the radiation flux Φ; (ii) the processor's cross section α; and (iii) the size-time figure of the given program configuration (χ_SDC or χ_HANG). Secondly, for time-sampled systems, where the program starts every T_S (sample time), executes over a certain time T_E, then stops until the next sample, we can compute the mean time to failure (MTTF), which is the average time between two failures:

  MTTF = T_S / (Φ · α · χ),

which also depends on the sample time T_S.

Model Validation under Radiation

The proposed model was evaluated against real radiation measurements. Table 5 shows the relevant model parameters for protons and neutrons measured during the two radiation campaigns described below. The model showed good accuracy for the estimation of reliability. In fact, the figures shown in the last two columns of Table 6 have an error of between −30% and +50% with respect to the radiation measurements (not shown in the table).

The device under test (DUT) selected for the irradiation experiments was the ZYBO board. The DUT is equipped with a 28 nm CMOS Xilinx ZYNQ XC7Z010 system on chip (SoC). This SoC is divided into two parts, an FPGA area (programmable logic, PL) and a 32-bit ARM Cortex-A9 microprocessor (processing system, PS). The processor has a 13-stage instruction pipeline that includes a branch prediction block and support for two levels of cache. In addition, the microprocessor has a small built-in memory called on-chip memory (OCM), where the bootloader or the program under test can be loaded. The DUT was controlled by an external computer, a Raspberry Pi 3 Model B, the main task of which is to receive and log all the messages sent by the DUT. The DUT was configured to send a state message every five seconds in the absence of errors; otherwise the message is notified instantly and the external computer resets and reprograms the DUT. The tested programs present a rich variety of flow structures and data. For example, BubbleSort (BB) is a well-known sorting algorithm that achieves its objective by making use of several nested loops.
The second algorithm considered here is the Dijkstra algorithm (DK), which solves the well-known shortest path problem and uses an adjacency matrix, stored in memory, where the weights of all paths are located.

Proton Irradiation Campaign

The test campaign was carried out in mid-2018 at the National Centre for Accelerators (CNA), in Spain [14]. Irradiation tests were performed using the external beam line installed in the cyclotron laboratory. Although the proton energy delivered by this cyclotron was fixed at 18 MeV, the beam was extracted into the air, reaching the DUT position with an energy of 15.2 MeV. The flux fluctuated within ±5% during each run. Beam uniformity under these experimental conditions was better than 90% in the area of interest.

Neutron Irradiation Campaign

The neutron SEE campaigns were performed at the Los Alamos Neutron Science Center (LANSCE) in September 2018 [15,16]. The neutron beam was provided by a tungsten spallation source at approximately 30 degrees to the left of the main beam. During the campaign, the DUT remained at 23 m from the neutron source, and the beam was collimated so that a spot on the order of 30 mm in diameter was obtained. This size covers the active area with a uniformity better than 90%. A constant neutron flux of 1.7·10^5 n/(s·cm²), above 10 MeV, was obtained.

Reliability Issues

In the last step of this activity, our model has been used to identify the most critical storage areas, variables and data structures, that is, those which most affect reliability, in order to concentrate hardening efforts on the most relevant areas. In addition, the performance of the proposed C++ classes was compared against other optimization techniques proposed by the same authors in [10]. The proposed C++ classes have been used to protect a variety of programs on an ARM Cortex-A9 processor and our model has identified the most critical storage areas, which deserve more hardening effort. Some results are shown in Table 6, namely for a BubbleSort sorting algorithm and a Dijkstra shortest-path algorithm, with both on-chip memory (OCM) and an external rad-hard memory (EXT), as well as neutron and proton irradiation. All these results were also verified during the two radiation campaigns briefly described in Section 3.4.

Performance Considerations

We draw a few considerations here, which can be found by analyzing the results shown in Table 6, where a few C++ hardening configurations are compared with other configurations hardened with specific aims [10]: mean work to failure (MWTF) maximization, fault coverage maximization (Max-ACE), trade-off optimization among execution time, memory size and fault coverage (Pareto), baseline compilation (O0) and code optimization (O3). All C++ versions were compiled using the -O3 optimization flag. We can observe that:

Table 6. Execution time T_E (for a 666 MHz clock), equivalent block sizes for SDCs and HANGs, and total size-time figures for a BubbleSort and a Dijkstra program, for different compilation flags and for the use of C++ classes vs. other hardening techniques, for four storage blocks (registers, data memory, stack, and code memory), taken as examples, for an ARM Cortex-A9 processor, using either on-chip memory (OCM) or external rad-hard memory (EXT). Highlighted values are those referenced in the text for the sake of clarity.
• in the BubbleSort algorithm the influence of the stack (β D2 and γ D2 ) is close to zero, and therefore negligible with respect to the influence of the other storage blocks (β R , γ R , β P and γ P ); in this situation, it is useless to protect the stack. In the Dijkstra algorithm, the influence of the stack on SDCs (β D2 ) is comparable to that of data storage (β D1 ), at least for one configuration (DK-L3); in this situation, it might also be worth protecting the stack; • the use of C++ classes (BB-C14, BB-L4, DK-L3) increases execution time by a factor of between 2 and 10, depending on the configuration (without considering BB-C11, which runs on an external, slower, rad-hard memory), but it reduces the influence of data memory on SDCs (β D1 ) by a factor of 100 and almost nullifies the influence of program memory on SDCs (β P ); the effects of the C++ classes on HANGs are negligible; • for configurations not protected by means of the C++ classes, the effect of registers on SDCs and HANGs is negligible despite the registers' very high cross section (see K R in Table 5); when protecting the program by means of the C++ classes, the effect of registers (mostly for SDCs) in BubbleSort is almost the only relevant one; increasing protection therefore requires an additional effort to protect the registers, which cannot be protected by means of the C++ classes; • using an external, slower, rad-hard memory (configuration BB-C11, based on the proposed C++ classes, without cache) offers the lowest equivalent block sizes for all data storage (except, obviously, registers), despite increasing the execution time, T E , by a factor of 20 to 25; • looking at the total size-time figures (last two columns), which are the most relevant overall parameters directly affecting MWTF and MTTF, the reduction of equivalent program size often counteracts the increase in execution time. The best performance for SDCs was achieved using the proposed C++ classes, while the best performance for HANGs was achieved with configurations BB-C3 and BB-C5. Optimization Process This section shows how an appropriate use of the compact model can rapidly optimize the usage of the C++ classes. We took as an example an optimized BubbleSort algorithm (different from the one used for Table 6) running at 666 MHz on a Cortex-A9 processor and irradiated by protons. We simulated the few configurations shown in Table 7, both for SDCs and for HANGs. Each row shows a different configuration: the first and second configurations are plain C code with no optimization (-O0) and with the highest optimization (-O3), respectively. Each column shows the equivalent size of: registers (REG); the whole data memory (β D ); only the first, the second, and the third C variables of the program (β D,V1 , β D,V2 , β D,V3 , respectively); the other five, less relevant, variables taken together (β D,V4 ); and program memory (PROG); the next three columns show the equivalent size, the execution time and the size-time figure of the whole program; the last two columns show the expected MWTF and MTTF for a given irradiation level (see the caption of Table 7). From the table it is, for instance, clear that the variable V2 has by far the highest relevance for SDCs (namely, the highest size, 261 B/388 B) among all the C variables. It would therefore be worth hardening only that variable by means of the C++ classes. The hardening of other variables would add significantly to the execution time while reducing the total equivalent size by a negligible amount.
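For readers unfamiliar with SIHFT-style wrappers, the fragment below sketches what "changing the data type to the proposed C++ class" can look like in practice. The paper does not reproduce its classes, so the class name, the triplication scheme, the majority-vote read and the silent-repair policy shown here are assumptions chosen to illustrate the idea, not the authors' implementation.

#include <cstdint>
#include <iostream>

// Illustrative TMR-style wrapper; NOT the paper's actual class.
// Three replicas of the value are kept: every read votes and silently
// repairs a corrupted replica, every write refreshes all replicas.
template <typename T>
class Hardened {
    T a, b, c;  // triplicated storage
public:
    Hardened(T v = T{}) : a(v), b(v), c(v) {}

    // Majority vote: if one replica disagrees, repair it from the other two.
    T read() {
        if (a == b) { c = a; return a; }
        if (a == c) { b = a; return a; }
        if (b == c) { a = b; return b; }
        // All three disagree: unrecoverable double/triple fault.
        // A real implementation would signal this; here we simply return one copy.
        return a;
    }

    void write(T v) { a = b = c = v; }

    // Convenience operators so existing code needs few changes.
    operator T() { return read(); }
    Hardened& operator=(T v) { write(v); return *this; }
};

int main() {
    Hardened<int32_t> x = 42;   // drop-in replacement for "int32_t x = 42;"
    x = x + 1;                  // the read votes, the write refreshes all replicas
    std::cout << static_cast<int32_t>(x) << "\n";  // prints 43
}

The verification cost on every read is what drives the execution-time increase discussed above, which is why hardening only the most critical variable can be the better trade-off.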
Consequently, one variable, V2, when hardened (by changing the data type to the proposed C++ class), yields the results shown in the third line of the table, which shows the lowest size-time figure χ SDC among all the configurations. We also evaluated the fourth configuration of the table, for comparative purposes, by applying the C++ classes to all the program variables. It is clear that the configuration with only one hardened variable, V2, showed the best equivalent size (135 B) and size-time figure (29 B·ms) among all of them, despite the higher execution time (215 µs). The same configuration also shows the highest MWTF (about ten times higher than the -O0 configuration and two times higher than the -O3 configuration without the C++ classes; slightly lower for HANGs) and MTTF (also about ten times higher than the -O0 and two times higher than the -O3 without C++ classes), proving the effectiveness of the proposed method. Table 8 shows the global MWTF and MTTF metrics (including both SDCs and HANGs). As can be seen, the configurations hardened by the C++ classes provide the best overall reliability. Table 7. Equivalent sizes (β TOT and γ TOT ), size-time figures (χ SDC and χ HANG ) and reliability metrics (MWTF and MTTF) of a selected program (optimized BubbleSort) in a few different configurations. The individual impacts of registers (R), total data memory (D), individual memory variables (V1 through V3), other variables (V4) and code area (P), as well as the total (TOT), are given for an ARM Cortex-A9 processor with on-chip memory (OCM) running at a 666 MHz clock frequency. The last two columns refer to the estimated proton irradiation results with a radiation flux of 5.45 × 10⁵ particles/(cm²·s) and a sample time T S = 20 ms. Highlighted values are those referenced in the text for the sake of clarity. Further Improvements It is clear from Table 7 that the proposed C++ classes significantly reduced the influence of data storage for SDCs and slightly reduced the influence of data storage for HANGs, although they significantly increased the execution time. The reason is that all the variables were protected in the configurations shown in Table 6. Nevertheless, the proposed approach can be used to address the effect of each variable individually, by splitting data storage into smaller blocks (D1, D2, ...), namely one per variable or group of variables, and evaluating the effect of each of them on execution time and equivalent program sizes. From this analysis, the best trade-off between what to protect and what not to protect can be assessed. Another parameter that can be addressed is the rate of data verification and recovery: each data verification in the C++ classes takes time and data recovery takes even longer. Frequent verifications and recoveries increase execution times, while less frequent verifications can increase the risk of double faults. A trade-off may also be established in this case, by means of the proposed approach. Conclusions A new hardening approach has been proposed on the basis of a set of C++ classes, to ease the protection of existing and new software programs. A simple though accurate reliability model has also been proposed, to support the optimization of the usage of the C++ classes, and to compare the performance of those classes with that of other hardening methodologies.
A relevant feature of this model is that it provides two compact figures (namely, the size-time figures χ SDC and χ HANG ) that directly relate to the reliability figures (MWTF and MTTF) for SDCs and HANGs, respectively, by taking into account both the increased computation time (typical of SIHFT) and the improvement in robustness (typical of TMR). The basic results showed that programs protected with the C++ classes were slower, but less subject to radiation-induced effects. The two effects partially canceled out when considering the mean time or mean work between consecutive program HANGs, while the lower sensitivity to radiation outweighed the increase in execution time when considering the mean time or mean work between consecutive SDCs. It has been shown that a straightforward usage of the C++ classes improved the reliability of a software system against corrupted results, but had less effect on program HANGs. A targeted application of the proposed C++ classes to specific variables significantly improved both effects. In conclusion, the use of the appropriate C++ classes shown in this paper has greatly facilitated the use of TMR. Moreover, the availability of an easy-to-use performance estimation model allows quick and effective radiation-tolerance optimization of COTS microcontroller systems. Funding: This work was funded by the Spanish Ministry of Economy and Competitiveness and the European Regional Development Fund through the following projects: 'Evaluación temprana de los efectos de radiación mediante simulación y virtualización. Estrategias de mitigación en arquitecturas de microprocesadores avanzados' and 'Centro de Ensayos Combinados de Irradiación' (Refs: ESP2015-68245-C4-3-P and ESP2015-68245-C4-4-P, MINECO/FEDER, UE).
8,073.8
2019-06-10T00:00:00.000
[ "Engineering", "Computer Science", "Physics" ]
Unraveling the Influence of HHEX Risk Polymorphism rs7923837 on Multiple Sclerosis Pathogenesis One of the multiple sclerosis (MS) risk polymorphisms, rs7923837, maps near the HHEX (hematopoietically-expressed homeobox) gene. This variant has also been associated with type 2 diabetes susceptibility and with triglyceride levels, suggesting its metabolic involvement. HHEX plays a relevant role as a negative regulator of inflammatory genes in microglia. A reciprocal repression was reported between HHEX and BCL6, another putative risk factor in MS. The present study evidenced statistically significant lower HHEX mRNA levels in lymphocytes of MS patients compared to those of controls, showing a similar trend in MS patients to the already described eQTL effect in blood from healthy individuals. Even though no differences were found in protein expression according to HHEX genotypes, statistically significant divergent subcellular distributions of HHEX appeared in patients and controls. The epistatic interaction detected between BCL6 and HHEX MS-risk variants in healthy individuals was absent in patients, indicative of a perturbed reciprocal regulation in the latter. Lymphocytes from MS carriers of the homozygous mutant genotype exhibited a distinctive, more energetic profile, both in resting and activated conditions, and significantly increased glycolytic rates in resting conditions when compared to controls sharing the HHEX genotype. In contrast, significantly higher mitochondrial mass was evidenced in homozygous mutant controls. Introduction Multiple sclerosis (MS) is a chronic, inflammatory, demyelinating, immune-mediated disease that affects approximately 2.5 million people worldwide, with an increasing prevalence [1]. The clinical manifestations and course of MS are variable: in most patients, reversible episodes of neurological deficits characterize the initial phase of the disease (relapsing-remitting) and, over time, permanent neurological deficits and progression of clinical disability are developed (secondary progressive). Around 15% of patients show a progressive disease course from onset (primary progressive). MS typically debuts between 20 and 35 years old and it is the first cause of non-traumatic neurological disability in young adults. The etiology of the disease is still elusive, but the most accepted model proposes a combination of genetic and environmental factors [2]. For decades, the major histocompatibility complex (MHC) was the only genetic factor related to MS susceptibility. More recently, genome-wide association studies (GWAS) allowed the identification of 233 MS risk variants accounting for around 50% of the total MS heritability [3]. One of these MS-risk 2 of 11 single nucleotide polymorphisms (SNPs), rs7923837, maps on chromosome 10q23.33, near the HHEX (haematopoietically-expressed homeobox) gene. The HHEX gene encodes an oligomeric protein that belongs to the homeobox protein family, mainly known for its role in embryonic development [4]. In fact, as it is a critical regulator of vertebrate development affecting different key pathways, HHEX knockout mice are not viable and die during mid-gestation [5]. Interestingly, HHEX null mice show cardiovascular, endocrine, liver, muscle, nervous system, and metabolic phenotypes, suggesting extensive multisystem roles for the protein product of this gene. HHEX is a versatile protein that regulates cell activity through different mechanisms, including DNA distortion [6]. 
As a transcription factor, HHEX binds either to tandemly repeated recognition sequences or to other transcription factors, and its relevant role as a negative regulator of inflammation-related genes in microglia has been recently reported [7]. By inhibiting the eukaryotic translation initiation factor 4E (eIF4E)-dependent transport, it regulates the translocation of different mRNAs from the nucleus to the cytoplasm, and it is the first homeodomain protein that modulates mRNA transport independently of its role as a transcription factor [8]. In this sense, even though eIF4E is broadly expressed in all eukaryotic cell types, HHEX limits its activity, being a tissue-specific regulator that maintains expression only in myeloid cells, lung, thyroid, and liver in adults [9]. In the present study, we aimed to elucidate the influence on MS pathogenesis of the risk polymorphism located in the 3 -flanking region at 28 kb of the HHEX gene, rs7923837. A recently published work showed that this polymorphism acted as an eQTL (expression quantitative trait locus) for HHEX in blood samples from a cohort of healthy controls, with the minor allele decreasing HHEX expression levels [10]. Therefore, we not only tried to replicate the described eQTL effect of rs7923837 on the HHEX expression levels in healthy controls, but also to examine its role in MS patients. The integrative analysis combining GWAS and eQTL results is accepted to provide clues to pinpoint candidate genes for these complex conditions. Nonetheless, even the perfect colocalization between eQTL and GWAS signals does not establish causality, and functional approaches are ultimately required. Moreover, another proposed MS-risk factor, BCL6, has been shown to directly bind the HHEX locus, and a reciprocal repression was evidenced, as BCL6 is upregulated in HHEX-deficient cells [11]. Therefore, we also aimed to investigate a potential epistatic effect between both genetic risk factors [12]. Given that HHEX is a key homeodomain transcription factor for the development of common lymphoid progenitor cells [13], and that an aberrant immune function underlies MS pathology [1], we pursued the study of peripheral blood mononuclear cells (PBMCs) as an accessible biological sample. Furthermore, an extensive bibliography has been published regarding the impact of the HHEX gene on type 2 diabetes susceptibility, even with the involvement of the exact MS-risk polymorphism already mentioned [14]. The studied HHEX polymorphism rs7923837 has been significantly associated with triglyceride levels by multiple linear regression analyses, and two other SNPs in the downstream region of the gene with total cholesterol levels [15]. Moreover, genome-wide association studies identified noncoding SNPs associated with type 2 diabetes and obesity in linkage disequilibrium (LD) blocks encompassing the HHEX gene [16][17][18]. These LD blocks contain highly conserved noncoding elements which overlap with the genomic regulatory blocks of the HHEX gene [19]. These results suggest the possible implication of HHEX in metabolic reactions and led us to explore the metabolic profile of immune cells isolated from MS patients and healthy controls. Since 2007, GWAS studies have established associations between SNPs and disease risk. However, they lack the resolution needed to ascertain causal variants, because SNPs are usually found in linkage disequilibrium with multiple protein-coding loci as well as with non-coding gene-regulatory elements that act over long distances [20]. 
Even though comprehensive, multi-layered, and integrative approaches applying artificial intelligence workflows have recently been published [21], further work is needed to fully characterize putative effector genes. To deepen the understanding of MS pathogenesis, functional studies such as ours are warranted to prioritize causal genes. Lymphocytes from MS Patients Show Lower HHEX Expression Than Those from Healthy Controls Statistically significant lower HHEX mRNA levels were observed in PBMCs from MS patients compared to healthy controls (Figure 1A). As already mentioned, Ricaño-Ponce et al. [10] described that rs7923837 acts as an eQTL for HHEX in healthy controls. In accordance with these results, we found reduced levels of HHEX expression both in MS patients and controls carriers of the homozygous minor genotype rs7923837*AA (15% and 18%, respectively) when compared with carriers of the major allele (GG and GA), although these differences did not reach statistical significance (Figure 1B). Moreover, this tendency was consistently observed when MS patients were stratified according to treatment in both interferon-β and glatiramer acetate treated patients. Regarding protein levels, the role of rs7923837 in controls followed a parallel situation to that found for mRNA, with a decreased protein expression in rs7923837*AA minor allele homozygotes. However, this trend was not evidenced in MS patients (Figure 1C). Enriched HHEX Nuclear Localization in rs7923837*AA Homozygous MS Patients The activity of HHEX as a transcription factor can be linked to its cellular location.
Thus, the influence of the HHEX polymorphism rs7923837 on nuclear translocation was analyzed by confocal microscopy, and a distinct effect of this SNP was observed in MS patients and in healthy controls (Figure 2A,B). Carriers of the major allele showed a higher nuclear location of HHEX than rs7923837*AA homozygous controls (p = 0.047). In contrast, the nuclear location of HHEX was significantly increased in rs7923837*AA homozygous patients when compared to MS carriers of the major allele (p = 0.01). Moreover, homozygous individuals for the risk genotype rs7923837*AA evidenced a significantly higher nuclear location of HHEX in MS patients compared to healthy controls (Control AA median = 0.56; MS AA median = 0.39, p = 0.005, Figure 2B). As already mentioned, a reciprocal interaction between BCL6 and HHEX has been reported [11]; therefore, we aimed to test whether the HHEX expression depends on the BCL6 MS-risk polymorphism. To this end, the levels of expression of HHEX were stratified according to the BCL6 rs2590438 genotypes (Figure 2C). A statistically significant increase was observed for the BCL6 minor allele homozygous controls when compared to control carriers of the major allele (p = 0.004). In contrast, no difference was observed between MS subgroups, and a very significant difference was evidenced between minor allele homozygous patients and controls (p = 0.0036). Mitochondrial Metabolism The metabolic reprogramming of lymphocytes upon activation influences immune responses and ultimately affects disease progression. Consequently, we aimed to study the mitochondrial bioenergetics in the whole PBMC population, postulating that the crosstalk among the different immune subpopulations would determine the observed final outcome. Upon PHA activation, immune cells increase their metabolic demands and, in our experimental conditions, the MS homozygotes for the rs7923837*AA genotype showed the more energetic phenotype (Figure 3A). When patients and controls stratified by HHEX genotypes were compared (Figure 3B-D), significant differences were consistently revealed for the Control vs. MS AA subgroups in basal and maximal glycolytic capacities and in glycolytic reserve in resting conditions. In addition, the increment in mitochondrial mass after PHA stimulation pinpointed homozygous controls for the rs7923837*AA genotype, with significantly higher mass in global PBMCs (Figure 3E), in the CD3 + (Figure 3F) and CD3 − CD20 − (mainly NK cells, Figure 3H) subpopulations, but not in B cells, which evidenced a similar pattern in MS patients and controls (Figure 3G).
Discussion Efforts to unveil the underlying causal genes and mechanisms involved in complex diseases are demanded. The present study is an attempt to refine the role of one of the described MS-risk variants near the HHEX gene, rs7923837. Our results regarding HHEX mRNA expression in PBMCs showed significantly lower levels in MS patients than in controls (Figure 1A). As reported, this HHEX polymorphism acts as an eQTL for the HHEX gene in blood samples of healthy subjects [10]. In accordance with this, in our cohorts, both controls and MS patients pointed to the previously described eQTL trend: the homozygous mutant genotype rs7923837*AA diminished HHEX expression (Figure 1B). Cell type-specific transcriptomic and epigenomic maps aid in the interpretation of the potential regulatory impact of rs7923837. The information provided by the Roadmap Epigenomics Project (roadmapepigenomics.org, accessed on 17 May 2022) revealed that this variant maps in a region that presents low transcriptional activity in PBMCs, as shown by high levels of repressive H3K27me3 histone methylation and low levels of H3K36me3 (found in areas undergoing active transcription). Since no differences were found in HHEX protein expression levels associated with the rs7923837 genotypes (Figure 1C), we aimed to analyze the subcellular location of HHEX by confocal microscopy (Figure 2A). Both HHEX activities, as a transcription factor and as a suppressor of eIF4E-mediated mRNA transport, take place in the nucleus and, therefore, a higher nuclear location is indirectly related to its activity. Homozygotes for the rs7923837*AA genotype displayed a significantly different cellular distribution of HHEX when compared to carriers of the major G allele, both in MS patients (p = 0.01) and controls (p = 0.047) (Figure 2B). Moreover, while carriers of the HHEX major allele maintain a similar subcellular distribution in MS patients and controls, rs7923837*AA homozygotes showed an increased level of cytoplasmic HHEX in the control population, but an increment in nuclear HHEX in MS patients (p = 0.005). Considering the described interaction of BCL6-HHEX, we observed significantly different levels of HHEX expression between the subgroups of controls stratified by the BCL6 MS-risk polymorphism. This difference was lacking between the MS counterparts (Figure 2C), suggesting a defective reciprocal regulation, which could have an impact on the disease pathogenesis. This BCL6-HHEX epistatic interaction does not exhaust other possible epistatic effects, i.e., HHEX knockdown considerably enhanced the expression level of eomesodermin (EOMES), another MS-risk factor identified by GWAS [3]. HHEX binds to the HHEX-response element located in the first intron of the EOMES gene [22], indicative of the intricate network lying behind these complex diseases.
Emerging interest in metabolic reprogramming and its impact on lymphocyte activation led us to study the cellular bioenergetics in our cohorts stratified by the HHEX polymorphism with an already documented influence on metabolism. The immune system comprises a series of specialized cells able to rapidly respond to pathogens or inflammatory stimulus. Understanding how the metabolic activity influences immune responses and affects disease progression is of utmost importance in immune-mediated conditions such as MS. The bioenergetic profiling of lymphocytes has revealed that the cellular metabolism changes dynamically with activation. Upon antigen encounter, T cells undergo extensive proliferation and switch to a program of anabolic growth and biomass accumulation, with increased demand for ATP and metabolic resources. It has been described that HHEX is a key regulator of early lymphoid development and functioning [23]. Regulatory T cells (Tregs) play an essential role in maintaining the immune homeostasis and Tregs show lower expression of HHEX than conventional T cells [23]. As reported, HHEX directly binds to the promoters of Treg signature genes, such as Foxp3, Il2ra, and Ctla4, suggesting a role of HHEX as a Treg negative regulator. Specifically, the activity of the Foxp3 promoter is almost completely inhibited by HHEX binding, and Foxp3, Il2ra, and Ctla4 act as Treg-specific super-enhancers, which could easily be targets of the same transcription factor. Regulating energy metabolism provides a way for T cells to reversibly switch between quiescent and highly proliferative states. In our experimental conditions, the MS carriers of the HHEX homozygous mutant genotype showed a distinctive, more energetic profile, both in resting and PHA-activated conditions ( Figure 3A), and significantly increased glycolytic rates before PHA stimulation when compared to controls sharing the HHEX genotype ( Figure 3B-D). Most probably, the continuous exposure to the self-antigen(s) would be responsible for the observed pre-activated glycolytic engagement in MS patients. It is important to remember that aerobic glycolysis is the dominant pathway in effector T cells, while resting naïve T cells maintain low rates of glycolysis and predominantly oxidize glucose-derived pyruvate via oxidative phosphorylation (OXPHOS) or use fatty acid oxidation (FAO) for ATP production. This strong bias toward glycolysis over mitochondrial metabolism was also evidenced by the significantly higher increase in mitochondrial mass upon PHA activation in homozygous mutant controls as compared to MS patients with the same genotype ( Figure 3E,F,H), which did not seem to affect peripheral B cells ( Figure 3G). Interestingly, integrated single-cell transcriptomics has recently revealed strong germinal center (GC)-associated etiology of autoimmune risk loci. In fact, many genetic variants implicated in autoimmunity exhibit their greatest regulatory potential in GCassociated cellular populations, including BCL6 and transcription factors regulating B cell differentiation, such as POU domain class 2 homeobox associating factor 1 (POU2AF1) and HHEX [24]. The role of HHEX seems confined to the generation of GC-derived memory, and it is not involved in the maintenance or function of memory B cells [25]. The key role of GC in both adaptive immunity and peripheral tolerance by limiting autoreactive B cells explains why dysfunction in these processes can lead to defective immune responses and autoimmune diseases. 
These compartmentalized studies provide a complementary perspective to our approach, which seeks to evaluate the comprehensive lymphocytic outcome resulting from the immune cellular crosstalk. Furthermore, in the adult central nervous system (CNS), neurons possess a limited capacity to regenerate injured axons, limiting repair. In fact, activating pro-regenerative gene expression in CNS neurons is a promising therapeutic approach. HHEX is widely expressed in adult CNS neurons, but is present only in trace amounts in immature cortical neurons and adult peripheral neurons. HHEX overexpression in early postnatal cortical neurons reduced both initial axonogenesis and the rate of axon elongation, suggesting a role for HHEX in restricting axon growth in the developing CNS [26]. As mentioned, HHEX negatively regulates inflammation-related genes in microglia, opening provocative therapeutic avenues [7]. Altogether, these results provide clues for the interpretation of the genetic causes underpinning MS disease. Our work demonstrates how the HHEX genetic variant influences the subcellular location of the encoded protein involved in critical immune regulatory and metabolic profiles. Understanding the genetic bases of immune system regulation may have broad implications for disease treatment. Study Population: Patients and Controls The study included a total of 154 MS patients (59.5% females) and 117 healthy controls (67.5% females), with mean ages of 40.9 ± 11.3 and 41.4 ± 10.1 years, respectively. Patients were diagnosed with relapsing-remitting multiple sclerosis according to the McDonald criteria [27], which require dissemination of lesions in space and time, resulting in an earlier diagnosis of MS with a high degree of both specificity and sensitivity. All patients were recruited from collaborating hospitals in the Madrid region during routine visits with their Neurology departments. Patients were treated with interferon-β formulations or glatiramer acetate and had no evidence of relapse before or after extraction. None of the control subjects reported first- or second-degree relatives with any immune-mediated disease. All participants were recruited after written informed consent, and the study was approved by the Ethics Committee from Hospital Clínico San Carlos. Peripheral blood samples were collected and PBMCs (peripheral blood mononuclear cells) were separated with Lymphoprep density-gradient centrifugation following the manufacturer's instructions (07851, Stemcell Technologies, Vancouver, BC, Canada); genomic DNA was extracted from the granulocyte phase following a salting-out procedure and was quantified and preserved at −20 °C; and PBMCs were cryopreserved in liquid nitrogen until further analysis. Genotyping Genotyping was performed in 154 MS patients and 117 healthy controls by TaqMan technology on a 7900HT Fast Real-Time PCR System (Applied Biosystems, Foster City, CA, USA) following the manufacturer's protocols. TaqMan probes for HHEX rs7923837 (C__31982553_10) and BCL6 rs2590438 (C___1699097_10) were purchased from Applied Biosystems (Foster City, CA, USA). Confocal Microscopy PBMCs (2.5 × 10⁵) of 25 MS patients and 12 controls were plated for 1 h at 37 °C on dishes coated with polyornithine (20 µg/mL, P4957, Sigma Aldrich, Bremen, Germany) and fixed for 15 min with 4% paraformaldehyde at room temperature.
Cells were stained with rabbit anti-human HHEX (HPA055460, Atlas Antibodies, Bromma, Sweden) as the primary antibody, and a combination of biotinylated anti-rabbit antibody (BA-1000, Vector Laboratories, Newark, CA, USA) and streptavidin-Alexa Fluor 555 (S32355, Invitrogen, Waltham, MA, USA) supplemented with Draq5 (ab108410, Abcam, Cambridge, United Kingdom). Coverslips were mounted in fluorescent mounting medium (DAKO) and photographed with an Olympus FV3000 Confocal Laser Scanning Microscope. Images were quantified with Fiji software, an open-source image processing application [28]. In order to quantify the relative amount of nuclear HHEX compared to the cytoplasmic amount, the raw integrated intensity of the fluorescence signal was calculated for the whole cell, and another was calculated for the area of the cell overlapping the nuclear dye Draq5. The cytoplasmic HHEX signal was calculated by subtracting the nuclear signal (Draq5 + HHEX) from the whole-cell signal. Finally, the ratio between the nuclear and cytoplasmic signals was calculated. Flow Cytometry Excess PBMCs from the metabolic assay were stained the same day with Mitotracker Green (M46750, Invitrogen, Waltham, MA, USA) following the manufacturer's instructions. Pairs of unstimulated and PHA-stimulated samples (25 MS patients and 14 controls) were subsequently stained with anti-CD3-PE (clone HIT3a) and anti-CD20-APC (clone 2H7) to determine lymphocyte subpopulations, and with 7-AAD to exclude non-viable cells (Biolegend, San Diego, CA, USA), and were acquired immediately in a CytoFLEX cytometer (Beckman Coulter, Brea, CA, USA). Cells were gated according to surface markers: CD3 + CD20 − (T lymphocytes), CD3 − CD20 + (B lymphocytes), and CD3 − CD20 − (mostly NK cells). Mitotracker Green is a fluorescent dye that binds the mitochondria, so the amount of green fluorescent signal in each cell is correlated with the mitochondrial mass of the cell. Therefore, the median fluorescence intensity (MFI) of the Mitotracker Green signal was measured in every experimental group and for each lymphocyte subpopulation. The ratio of the MFI signal between PHA-stimulated and unstimulated sample pairs was calculated to quantify the increase in mitochondrial mass after cell activation. Data were analyzed with Kaluza 2.1 software (Beckman Coulter, Brea, CA, USA). Statistical Analysis Normality was assessed with the Shapiro-Wilk test for datasets with fewer than 30 data points and with the Kolmogorov-Smirnov test for datasets with 30 or more entries. Outliers were detected and discarded with Grubbs' test (GraphPad online tool, https://www.graphpad.com/quickcalcs/Grubbs1.cfm, accessed on 1 April 2022). In analyses that conformed to the normal distribution, comparisons were performed with Student's t-tests and ANOVA tests, with Welch's correction for samples with unequal variances. Variables that did not follow a normal distribution were compared with Mann-Whitney U and Kruskal-Wallis tests. A standard p value of 0.05 was set for significance in all cases. Statistical analyses were performed with SPSS v15.0.1 (Chicago, IL, USA) and graphical representations were carried out with GraphPad v5.01 (San Diego, CA, USA).
AGJ holds a Formación de Profesorado Universitario contract (FPU20/03387) from Ministerio de Ciencia e Innovación. Institutional Review Board Statement: The study was conducted in accordance with the Declaration of Helsinki and approved by the Ethics Committee of Hospital Clínico San Carlos (CE16/211-E and CE20/740-E_BC). Informed Consent Statement: Informed consent was obtained from all subjects involved in the study. Data Availability Statement: The data that support the findings of this study are available upon request from the corresponding author.
5,786.2
2022-07-01T00:00:00.000
[ "Medicine", "Biology" ]
CALLISTO-SPK: A Stochastic Point Kinetics code for performing low source nuclear power plant start-up and power ascension calculations This paper presents the theory and application of a code called CALLISTO which is used for performing NPP start-up and power ascension calculations. The CALLISTO code is designed to calculate various values relating to the neutron population of a nuclear system which contains a low number of neutrons. These variables include the moments of the PDF of the neutron population, the maturity time and the source multiplier. The code itself is based upon the mathematics presented in another paper and utilises representations of the neutron population which are independent of both space and angle but allows for the specification of an arbitrary number of energy groups. Five examples of the use of the code are presented. Comparison is performed against results found in the literature and the degree of agreement is discussed. In general the agreement is found to be good and, where it is not, plausible explanations for discrepancies are presented. The final two cases presented examine the effect of the number of neutron groups included and finds that, for the systems simulated, there is no significant difference in the key results of the code. 2017 The Authors. Published by Elsevier Ltd. This is an openaccess article under the CCBY license (http:// creativecommons.org/licenses/by/4.0/). Introduction In a nuclear system, each neutron will have a finite probability of causing a fission, being absorbed, escaping the system and so on. It is not possible to know in advance or simulate what fate will befall any given neutron. In addition, the exact time at which a source releases neutrons is not known in advance even if its average intensity is known. In most cases where nuclear systems are being discussed, such as in nuclear power stations under normal operational conditions, there are very large numbers of neutrons present and the change in the total number of neutrons over time may be well approximated by analysing the expected behaviour of this large population of neutrons. This is because the law of large numbers means the overall behaviour in such a case tends towards the mean behaviour. However, in systems with smaller populations of neutrons, the number of neutrons will not be predictable and repeating identical initial conditions may lead to different results. This is relevant in the case of reactor start-up where it is important to ensure the probability of an undesired stochastic transient is sufficiently low for the start-up process to be classed as safe. There are several methods which have been proposed and, generally, each tends to have strengths and weaknesses. The CALLISTO Stochastic Point Kinetics code has been constructed based upon the work presented in Williams and Eaton (2017), which provides an in-depth analysis of the physics and mathematics represented here. This code is able to simulate an arbitrary number of prompt neutron energy groups and delayed neutron precursor groups with no spatial or angular dependence. It contains multiple modules designed to calculate the moments and the generating function and its derivatives of the population density function of the number of neutrons in neutron energy groups of interest, the maturity time of the system, the k inf and reactivity of the system and the source multiplier. 
This paper summarises the mathematical model contained in the CALLISTO code before presenting various example results produced using the code, in order to validate or verify the code or to examine the physics simulated by the code. Mathematical symbols not defined locally in the text are defined in Appendix B. CALLISTO calculations Within CALLISTO, variables such as the cross-sections, the probabilities of a fission producing different numbers of neutrons and the delayed neutron fractions are defined by the user and may each be functions of time. For a given time-dependent description of a system, CALLISTO may perform any one of a number of different calculations, such as calculating the k inf or reactivity of the system. This section describes some of the calculations which may be conducted by CALLISTO. This section aims only to summarise the general method, as this has been previously discussed in more detail in Williams and Eaton (2017). If more detail on the methods employed or their justification is desired, that paper should be consulted. k inf and reactivity Both the k inf and reactivity values of the system in question are calculated within CALLISTO as functions of time by solving the eigenvalue equation using the power iteration method. The value of k inf rather than k eff is relevant for the systems presented as the systems are considered infinite in extent. Moments of the probability density function of the number of neutrons in the system To calculate the mean and standard deviation of the number of neutrons in the system at a given time t, the system of equations presented in Appendix A.2 is solved. This solution is performed backwards in time, with the variable s representing the time variable which is being advanced backwards. When s = 0 the solution is complete. For instance, the variable N_S(t, U|s) represents the mean number of neutrons present in the system in energy range U at a time t due to neutrons released by a source in the time period between time s and time t. As such, by reducing s from t to zero, the total number of neutrons present in the system at time t due to neutrons produced by sources since time 0 can be obtained. This approach means that the calculation of the moments of the PDF at each value of t is independent of the calculation of the moments of the PDF at every other value of t. Generating functions and derivatives of the probability density function of the number of neutrons in the system The generating function of the PDF of the number of neutrons in the system in energy range U at a time t due to neutrons released between a time s and a time t is defined by G_S(z, t, U|s) = Σ_{n=0}^{∞} z^n P(n, t, U|s), where z is the generating function variable and P(n, t, U|s) is the probability of there being n neutrons in the system in energy range U at a time t due to neutrons released between a time s and a time t. To calculate this value and its first and second derivatives with respect to z at a given time t, the system of equations presented in Appendix A.1 is solved. This solution is performed backwards in time, with the variable s representing the time variable which is being advanced backwards. When s = 0 the solution is complete. This approach means that the calculation of the generating function and its derivatives at each value of t is independent of the calculation of the generating function and derivatives at every other value of t.
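The k inf calculation mentioned at the start of this section reduces, for an infinite medium, to finding the dominant eigenvalue of a group-to-group neutron production matrix. The C++ fragment below sketches a plain power iteration for that purpose; the two-group matrix, the fixed iteration count and all names are illustrative assumptions, not CALLISTO data or source.

#include <array>
#include <cmath>
#include <iostream>

// Power iteration for the dominant eigenvalue of a small group-to-group
// multiplication matrix M, where M[g][gp] is the expected number of
// next-generation neutrons appearing in group g per neutron in group gp.
// For an infinite medium this dominant eigenvalue is k_inf.
constexpr int G = 2;                      // number of energy groups (example)
using Vec = std::array<double, G>;
using Mat = std::array<Vec, G>;

double dominantEigenvalue(const Mat& M, int iterations = 200)
{
    Vec phi{1.0, 1.0};                    // arbitrary non-negative starting vector
    double k = 0.0;
    for (int it = 0; it < iterations; ++it) {
        Vec next{};                       // next = M * phi
        for (int g = 0; g < G; ++g)
            for (int gp = 0; gp < G; ++gp)
                next[g] += M[g][gp] * phi[gp];
        double norm = 0.0;
        for (double x : next) norm += std::fabs(x);
        k = norm;                         // once phi has unit 1-norm, norm tends to the eigenvalue
        for (int g = 0; g < G; ++g) phi[g] = next[g] / norm;
    }
    return k;
}

int main()
{
    const Mat M = {{ Vec{1.20, 0.50},     // made-up two-group production data
                     Vec{0.30, 0.80} }};
    std::cout << "k_inf ~ " << dominantEigenvalue(M) << "\n";
}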
Maturity time The maturity time is the time at which the rate of change of the RSD of the number of neutrons in the system becomes small as the RSD approaches an asymptotic value. This is also the time at which the RSD in the number of neutrons in the system first becomes close to the RSD of the number of delayed neutron precursors in the system, as demonstrated in Fig. 1.
Fig. 1. The RSDs of the neutron and delayed neutron precursor populations for a sample system which becomes critical at t ≈ 120 s and prompt super-critical at t ≈ 350 s (with the system having no neutrons or precursors present at t = 0 and a source which begins releasing neutrons at t = 0). Note that the neutron population refers to prompt neutrons created in a fission, delayed neutrons produced by the decay of delayed neutron precursors or neutrons released in source disintegration.
What is deemed "small" or "close" is not uniquely defined, meaning the maturity time is not uniquely defined and is to some degree subjective. The maturity time is found by solving the equations presented in Appendix A.2 with the appropriate final conditions for different values of t until the time t_mat is found at which RSD_neutron(t) and RSD_precursor(t) agree to within the maturity time convergence criterion, which is set to a value of 1 × 10⁻⁵ for the calculations performed in this paper. Here RSD_neutron(t) is the RSD of the number of neutrons in the system at time t and RSD_precursor(t) is the RSD of the number of delayed neutron precursors in the system at time t. The choice of this convergence criterion effectively selects what is considered "close" in terms of the difference in the RSDs of the neutrons and delayed neutron precursors. We may examine Fig. 1 in a little more detail. When we consider the state of the system after a very short period of time dt, the only event that may have occurred is the release of a single neutron from a source disintegration. This means the PDF of the number of neutrons in the system is given by P(1) = S dt (and hence P(0) = 1 − S dt). As a result, both the mean and the variance of the number of neutrons in the system are equal to S dt, meaning the RSD is equal to 1/√(S dt). A similar argument may be applied to the precursor population, but here the mean and variance will be proportional to dt², as both a source emission and a fission must occur to produce the first precursor. It follows that the precursor RSD will be proportional to 1/dt. In the middle region of this figure it is helpful to recall that, as the system is delayed super-critical (i.e., 0 $ < ρ < 1 $), any chain of prompt neutrons will die out on a timescale equal to a fairly small number of prompt neutron lifetimes. As a result, the neutron population is made up of short-lived chains of prompt neutrons caused by the release of a neutron from a source or the decay of a delayed neutron precursor (with the latter becoming progressively more important as the number of delayed neutron precursors increases). While one of these chains persists it may cause a large number of neutrons to be present in the system (of the order of tens or hundreds of neutrons at its peak). Near the start of the middle region, the mean neutron population will be low. This corresponds to there being a small probability of a non-zero number of prompt neutron chains being present in the system.
This means that there is a high probability that there will be no neutrons present and a small probability of there being a larger number of neutrons present (of the order of tens or hundreds of neutrons that a single chain can cause). For a given specified history of delayed neutron precursors, the number of chains of neutrons present will be represented by a Poisson distribution with a mean n̄_chain proportional to s_chain, where s_chain is the duration of a single chain of neutrons. In reality this will vary from chain to chain but we will assume it constant for simplicity here. The assumption of a Poisson distribution is valid if the source intensity, number of delayed neutron precursors and reactivity do not change on timescales smaller than s_chain. Thus, the RSD of the number of chains in a given realisation of the system is given by 1/√(n̄_chain), and this is proportional to the RSD of the number of neutrons. This represents the RSD in the number of neutrons in a given realisation but, as different realisations will have different numbers of delayed neutron precursors, this is only one contribution to the RSDs of the ensemble of realisations. The remainder is due to the RSD in the numbers of delayed neutron precursors across realisations. As time increases the number of delayed neutron precursors will increase, causing the probability of there being zero neutron chains present to drop and the probability of multiple chains being present to increase. This causes the RSD to decrease as the mean number of neutron chains present increases. In qualitative terms, the RSD of the delayed neutron precursors may be expected to be lower as the lifetime of delayed neutron precursors is much longer (0.1 s to 100 s). This causes a much smaller RSD in the number of delayed neutron precursors as their population is not varying so rapidly or to such extremes as the population of neutrons. At later times the RSD of both the neutrons and precursors tends towards the same steady value. This is because the component of the RSD of the neutron population relating to uncertainty due to the different number of neutron chains for a given number of precursors becomes small as n̄_chain increases. This means the only component of the RSD remaining is that of the number of delayed neutron precursors themselves. To examine this mathematically, we use the fact that, for a highly multiplying medium, the PDF takes the form of a gamma PDF (Radkowsky, 1964), P(n, t) = (1/Γ(g)) (g/⟨n(t)⟩)^g n^(g−1) exp(−g n/⟨n(t)⟩), where g = ⟨n⟩²/σ². We observe that, as t → ∞, g → g_∞ (a constant). Thus the PDF may be written as a function of n/⟨n(t)⟩ alone. Similarly, the PDF of the precursor population in each precursor group takes the same self-similar form. The variance of the PDF is σ_n² = ⟨n(t)⟩²/g. Setting x = σ_n²/⟨n(t)⟩², we find that x → 1/g_∞ as t → ∞. Thus, the RSD σ_n/⟨n⟩ is the same for all self-similar PDFs, including the neutron population and the delayed neutron precursor populations. As such, the maturity time may be seen as the time at which these populations may be considered self-similar. Source multiplier The source multiplier is the factor by which the source must be multiplied such that the probability that the number of neutrons present in the system at the maturity time (found with the original source) is less than or equal to the mean number of neutrons present at this time with the original source is equal to a given probability Q.
To illustrate this, consider the probability that the neutron population is less than some value n* (Eq. (11)). If n* is the mean number of neutrons in the system at time t_mat with the unmodified source, then the source multiplier may be thought of as the factor by which the source must be multiplied such that the probability calculated in Eq. (11) is equal to some prespecified value Q (Eq. (12)). This value may be used in reactor design in order to account for the effects of a low number of neutrons in a system on safety. To demonstrate this, consider a case where a reactor is being switched on with a linear increase in reactivity in the presence of a source of a particular intensity. In this case a power peak may be formed as the neutron population increases until its increase is prevented by changes in reactivity brought about by the control system or by negative feedbacks in the system due to the high neutron population. The timing of the first power peak is non-deterministic due to the low population of neutrons and, in this case, the later the power peak the larger it will be. This effect is discussed at length in Cooling et al. (2016). In the above case, it may be found through deterministic analysis that, during a reactor start-up, a particular combination of reactivity profile and source intensity produces a mean number of neutrons as a function of time which is on the limit of what is considered safe in terms of the power of the system. However, in a given realisation of the system, the actual number of neutrons present is not necessarily equal to the number predicted in the deterministic model. We proceed by considering the number of neutrons present in a specific realisation of the system at the maturity time (at which time the neutron population is expected to be large enough to be considered deterministic), which is before the power peak. If the number of neutrons in this realisation is lower than the mean number of neutrons at that time, the power peak will be later and larger than in the deterministic case and, thus, would be unsafe. As such, it is desirable to limit the probability of such a realisation occurring to below a prescribed value. To achieve this, the source which produced the just-safe power peak in the deterministic analysis can be increased by a factor equal to the calculated source multiplier relating to the desired probability. Increasing the source intensity changes the PDFs such that the peak power will occur at an earlier time, and thus it reduces the probability of a dangerous peak power at a later time. To calculate this value within CALLISTO, the maturity time is first calculated as described in Section 2.4. Next, the mean number of neutrons present at that time is calculated as described in Section 2.2. Next, a number of different values of the generating function variable z are sampled until the one which corresponds to the desired probability Q at the maturity time is found, utilising the equations in Appendix A.1 and A.3. The number of neutrons this corresponds to, n_prob, is found and the mean number of neutrons is divided by this number to find the source multiplier. Hurwitz curves Hurwitz curves are named after their appearance in Hurwitz et al. (1963). These curves display the source multiplier as a function of the related probability Q (see Section 2.5). To calculate the relevant values, the maturity time is first calculated as described in Section 2.4. Next, the mean number of neutrons present at that time is calculated as described in Section 2.2. Next, a number of different values of the generating function variable z are sampled and the related probabilities Q and the numbers of neutrons they correspond to, n_prob, at the maturity time are found, utilising the equations in Appendix A.1 and A.3. For each value of z the relevant source multiplier is found by dividing the mean number of neutrons by the related value of n_prob. Finally, the source multiplier is plotted as a function of Q.
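The search just described is straightforward to express in code. The C++ sketch below assumes a callback probAndCount(z) standing in for the Appendix A.1/A.3 solves, returning the probability Q and the corresponding neutron number n_prob at the maturity time for a sampled z; the callback, the simple linear scan over z and every name here are illustrative assumptions rather than CALLISTO source.

#include <cmath>
#include <functional>
#include <utility>

// Illustrative search for the source multiplier at the maturity time.
// probAndCount(z) stands in for the Appendix A.1/A.3 solves: for a sampled
// generating-function variable z it returns the pair {Q, n_prob}.
double sourceMultiplier(double meanAtMaturity,   // mean neutron number at t_mat
                        double targetQ,          // prescribed probability Q
                        const std::function<std::pair<double, double>(double)>& probAndCount)
{
    double bestNProb = meanAtMaturity;           // fallback: multiplier of 1
    double bestErr = 1e300;
    for (double z = 0.0; z < 1.0; z += 1e-3) {   // sample z in [0, 1)
        const auto [Q, nProb] = probAndCount(z);
        const double err = std::fabs(Q - targetQ);
        if (err < bestErr) { bestErr = err; bestNProb = nProb; }
    }
    return meanAtMaturity / bestNProb;           // the source multiplier
}

Sampling z over a range and recording the resulting (Q, mean/n_prob) pairs, rather than searching for a single target Q, gives the points of a Hurwitz curve directly.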
Results Within this section, different calculations are performed by CALLISTO. In some cases these will be compared to results in the literature or to results produced by other codes. In other cases the results explore the capabilities of the code or the implications regarding the underlying physics and models. Hurwitz curves Hurwitz et al. (1963) provide a number of Hurwitz curves for several reactivity ramp insertions. These curves describe the source multiplier as a function of the corresponding value of Q. Two of those curves are reproduced here. The transients which produced these curves are characterised by the ratios R/λ_1 (the ratio of the reactivity ramp rate to the decay rate of the single delayed neutron precursor group) and S/λ_1 (the ratio of the source rate to the decay rate of the single delayed neutron precursor group). The first case selected has a relatively low reactivity ramp rate, whilst the second has a relatively fast reactivity ramp rate. Both have the same source intensity. The data for these two cases are given in Table 1. Some data are duplicated directly from Hurwitz et al. (1963) whilst other values, which are not given in that paper, are approximations. The results are not expected to be particularly sensitive to the values which have been approximated. The results of these two cases are displayed in Fig. 2. The corresponding results from Hurwitz et al. (1963) are also shown for comparison. The agreement for Case 1 is very good and that for Case 2 is fair. One possible explanation for the deviations in Case 2 is that Hurwitz's model assumed that prompt neutrons have a negligible lifetime whilst CALLISTO does not make that assumption. This is more important in Case 2 as the transient is faster and so the reactivity is changing on a timescale that is closer to that of the duration of a burst of prompt neutrons, rendering the behaviour of prompt neutrons more important. Multi-group neutron populations This case is based upon Myers (1995), which presents a number of stochastic calculations of an infinite extent of 93% enriched uranium utilising four neutron energy groups. The neutronics parameters of the system are invariant in time. The calculation neglects the effect of delayed neutron precursors. A summary of the data used is presented in Table 2. Some of these data were not present in Myers (1995) in numerical form but, instead, were presented in bar charts from which the data were read manually. This may result in some small differences between the data used here and the data used in the calculations performed by Myers. The value of k inf , as expected for a static infinite slab of high-enrichment uranium, is very high and is also independent of time. The exact value is 2.22408987 as calculated by CALLISTO. A corresponding value does not appear to be given by Myers for comparison. The mean and standard deviation of the neutron population in energy group 1 as a function of time as calculated by CALLISTO are shown in Fig. 3.
Table 1. The variables used to define the data for the cases presented in Section 3.1 and selected derived quantities. R and S refer to the reactivity ramp rate and source intensity respectively.

Fig. 2. Hurwitz curves generated by Case 1 and Case 2 of the systems described in Section 3.1. The data series entitled "Hurwitz" were found by reading data from graphs within Hurwitz et al. (1963) and so there is some error attributed to these data series from this process.

The source intensity in question is 100 n/s and the time simulated is 1 × 10⁻⁷ s so, in the majority of cases, no neutrons will have been released. However, due to the high reactivity, in the instance that a neutron has been released, the population will grow very quickly. This means the probability distribution of the number of neutrons in the system will contain a sharp peak at zero neutrons and a long tail extending to a large number of neutrons. For this reason, the mean number of neutrons is low and the standard deviation is much higher than the mean. The mean and standard deviation of the number of neutrons present in the system as a function of time due to a single neutron injected into energy group g at t = 0 are presented in Fig. 4. This again shows how a single neutron injected in the system will give rise to a chain of neutrons which rapidly increases in number. Myers (1995) contains many plots of different variables as a function of time and many of these are comparable to data produced here. However, the data presented is in terms of a "Non-Dimensional Time t/l" which is not simply converted into a dimensional time, as a value for l does not appear to be given by Myers. Additionally, no instances where a source is present in the infinite medium are discussed. However, some comparisons may still be made between Myers' results and simulations performed by CALLISTO. One comparison which may be made is that of the ratio between the mean number of neutrons present in energy group 1 at a given time (once the system has attained a state of exponential growth) for neutrons injected into different energy groups at t = 0. Another ratio that may be compared is the standard deviation divided by the mean for the number of neutrons in energy group 1 for a neutron of a particular energy group inserted at t = 0. The results of these comparisons may be found in Table 3. It should be noted that the ratios from Myers are obtained by manually reading data from a logarithmic axis in a document reproduced from microfilm, so there is a considerable uncertainty in the estimate here of the value obtained by Myers. Overall, the results of Table 3 show similar overall trends but the exact agreement is patchy, with some results being very close and others differing significantly. As noted, obtaining results from Myers (1995) was an imprecise endeavour and this plausibly accounts for a considerable amount of the variation, although it is difficult to quantify this effect. The results of the third calculation provide the generating function of both the total number of neutrons present in the system due to the source and also the number of neutrons present in the system as a function of time following an insertion of a single neutron of energy group g at t = 0. These results are shown in Fig. 5.
Before considering these results, it is useful to recall the definition of the generating function relating to the number of neutrons present at time t due to source operation since time s, together with its derivatives with respect to z:

G_S(z, t, U|s) = ∑_{n=0}^{∞} z^n P_S(n, t, U|s),

where P_S(n, t, U|s) is the probability of there being n neutrons in energy range U in the system at a time t given a source that has been present since time s. It is also useful to recall the generating function of the number of neutrons present at time t due to a single neutron injected at time s in energy group g:

G̃_g(z, t, U|s) = 1 − ∑_{n=0}^{∞} z^n P_g(n, t, U|s),

where P_g(n, t, U|s) is the probability of there being n neutrons in energy range U in the system at a time t resulting from a single neutron injected in energy group g at time s.

Table 2. The variables used to define the data for the case presented in Section 3.2.

Fig. 3. The mean and standard deviation of the number of neutrons present in the system described in Section 3.2 as a function of time.

For this case, we have selected a value for the generating function variable z of 0 and so these simplify to:

G_S(0, t, U|s) = P(0, t, U|s), (17)
G'_S(0, t, U|s) = P(1, t, U|s), (18)
G''_S(0, t, U|s) = 2P(2, t, U|s), (19)
G̃_g(0, t, U|s) = 1 − P_g(0, t, U|s).

In Fig. 5a, the value of G_S(0, t, U|0) reduces from 1 to 0.99825 over the 2 × 10⁻⁷ s over which these values are calculated, implying that, at t = 2 × 10⁻⁷ s, P(0) = 0.99825. This matches up well with the fact that the source intensity is 1 × 10⁴ n/s. One would expect the probability of zero neutrons being present in a system with time-independent neutronics parameters and with a very short prompt neutron lifetime to be given by

P(0, t, U|0) = exp(−S t (1 − P_E)),

where P_E is the extinction probability for a neutron injected from the source. Solving this equation for t = 2 × 10⁻⁷ s gives an estimate of 0.124 for the extinction probability. We will go on to see that this is indeed a good approximation. Fig. 5b also gives information regarding the survival probability. Recalling that G̃_g(0, t, U|0) = 1 − P_g(0, t, U|0), we see that the asymptotic value of G̃_g(0, t, U|0) should give the survival probability for a neutron injected at t = 0 into group g. Similar data is also available in Myers (1995) and this comparison is tabulated in Table 4. Again, issues relating to the difficulty of extracting data from the graphs present in Myers (1995) cause a significant uncertainty in the values in this column. The agreement between Myers' results and CALLISTO's result in Table 4 is poor. However, note that the extinction probability estimated in the discussion of Fig. 5a of 0.124 (recalling the source injects neutrons only in group 1) is close to the CALLISTO estimate in Table 4 of 0.140. We may form another estimate of the extinction probability, which is actually a lower bound on it, by considering the cross-sections given in Table 2. We may do this by considering the probability that a neutron in a given energy group will result in a fission. This may be achieved with the following equations.

Fig. 4. The mean and standard deviation of the number of neutrons present in the system described in Section 3.2 as a function of time due to a single neutron injected into energy group g of the system at t = 0.

Table 3. Ratios of different moments of the number of neutrons in energy group 1 during the exponential growth phase following the injection of a neutron into the system described in Section 3.2. Here n̄_i is the mean number of neutrons present in the system after a neutron is injected into energy group i at t = 0 and σ_i represents the standard deviation in the number of neutrons present in the system after a neutron is injected into energy group i at t = 0. Note that difficulty in reading the graphs in which the data are presented in Myers (1995) contributes a significant potential error to results in this column.
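Before turning to those cross-section equations, the extinction-probability figure quoted above can be checked directly. The short script below simply inverts the exponential relation for P(0, t) with the values stated in the text (S = 1 × 10⁴ n/s, t = 2 × 10⁻⁷ s, P(0) = 0.99825); it is only an arithmetic check, not part of the CALLISTO calculation.

```python
import math

S = 1.0e4        # source intensity (neutrons per second), from the text
t = 2.0e-7       # simulated time (seconds), from the text
P0 = 0.99825     # probability of zero neutrons at time t, read from Fig. 5a

# Invert P(0, t) = exp(-S * t * (1 - P_E)) for the extinction probability P_E.
P_E = 1.0 + math.log(P0) / (S * t)
print(f"Estimated extinction probability P_E = {P_E:.3f}")  # ~0.124
```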
Fig. 5. The generating functions of the system described in Section 3.2 as a function of time due to either the source or due to a single neutron injected into energy group g of the system at t = 0. All results use z = 0.

In terms of the group cross-sections,

p_f,g = [ Σ_f,g + ∑_{g'} Σ_s,g→g' p_f,g' ] / Σ_tot,g,

where Σ_tot,g is the total cross-section of energy group g and p_f,g is the probability that a neutron in energy group g eventually causes a fission. Due to the fact that only down-scatter is present we may easily solve this set of equations by back-substitution, starting from the lowest-energy group, to obtain the values of p_f,g. This provides two pieces of information. Firstly, it provides a lower limit on the extinction probability for each group. If the neutron that is present at t = 0 does not cause a fission then it cannot sponsor a persisting chain. This leads to the relation

P_E,g ≥ 1 − p_f,g,

where P_E,g is the extinction probability of a neutron injected in group g. As can be seen in Table 4 the results produced by CALLISTO obey this inequality whilst the results obtained from Myers (1995) do not for cases where the neutron is injected in groups 1 or 2. Given the high probability that a neutron will cause a fission and the number of neutrons produced per fission, it is reasonable to expect that the true extinction probability will be similar to this lower limit. Again, the CALLISTO simulations are consistent with this expected result. The second piece of information that may be gained from the values of p_f,g relates to the ratios between them. Once a neutron has caused a fission it will produce a number of neutrons as determined by the values of p_f,g,m. We define P_E,fission,g to be the probability that a fission caused by a neutron originally injected in group g (noting that it may be in a different energy group when it actually causes a fission) does not produce a chain of neutrons that persists. We thus form the relation:

P_E,g = 1 − p_f,g (1 − P_E,fission,g).

Although P_E,fission,g is not easily available, we may rearrange this expression as P_E,fission,g = 1 − (1 − P_E,g)/p_f,g and take the ratio of this expression for neutrons of different energy groups. We tabulate these ratios for both the results produced by CALLISTO and Myers (1995) in Table 5. We note here that, as the fission spectrum is identical for each fission regardless of the energy group of the neutron that caused it, the difference between the values of P_E,fission,g must arise from the different fission multiplicities for fissions caused by neutrons of each energy, which derive from the values of p_f,g,m in Table 2. A neutron injected in group g is most likely to cause a fission in that group as opposed to any other, so it follows that the more likely it is that a fission releases a larger number of neutrons, the more likely the chain is to survive. Faster fissions release more neutrons (for this system a fission caused by a neutron in group 1 will release an average of approximately 2.98 neutrons and a fission caused by a neutron in group 4 will release an average of approximately 2.41 neutrons). As a result we expect P_E,fission,g to increase for values of g which correspond to lower energy groups.
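A sketch of this fission-probability bookkeeping is given below. The cross-section arrays are placeholders for illustration only (the actual four-group data of Table 2 are not reproduced in this excerpt), and the balance relation used is the reconstruction quoted above; with down-scatter only, the system is solved by back-substitution from the lowest-energy group upwards.

```python
import numpy as np

G = 4  # number of energy groups (group 1 = fastest, group 4 = slowest)

# Placeholder cross-sections (cm^-1), purely illustrative -- NOT the Table 2 data.
sigma_f = np.array([0.05, 0.06, 0.08, 0.12])   # fission
sigma_c = np.array([0.01, 0.01, 0.02, 0.03])   # capture
# scatter[g, g2] = macroscopic scattering cross-section from group g to g2
scatter = np.array([
    [0.10, 0.05, 0.02, 0.01],
    [0.00, 0.12, 0.06, 0.02],
    [0.00, 0.00, 0.15, 0.05],
    [0.00, 0.00, 0.00, 0.20],
])  # upper-triangular: within-group and down-scatter only
sigma_tot = sigma_f + sigma_c + scatter.sum(axis=1)

# Solve p_f[g] = (sigma_f[g] + sum_g2 scatter[g, g2] * p_f[g2]) / sigma_tot[g]
# by back-substitution, starting from the lowest-energy group.
p_f = np.zeros(G)
for g in range(G - 1, -1, -1):
    downscatter = sum(scatter[g, g2] * p_f[g2] for g2 in range(g + 1, G))
    # within-group scattering (scatter[g, g]) is moved to the left-hand side
    p_f[g] = (sigma_f[g] + downscatter) / (sigma_tot[g] - scatter[g, g])

# Lower bound: a neutron that never causes a fission cannot sponsor a
# persisting chain, so P_E,g >= 1 - p_f,g.
for g in range(G):
    print(f"group {g+1}: p_f = {p_f[g]:.3f}, extinction lower bound = {1 - p_f[g]:.3f}")
```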
This behaviour is observed in the CALLISTO results but not the Myers results in Table 5. The analysis of this system using CALLISTO has shown that the results are self-consistent and conform to what may be expected from physical reasoning. The comparison with Myers (1995) has produced mixed results, but it has been demonstrated that, where disagreement exists, there is significant evidence to favour the results of CALLISTO. The reason for this discrepancy is unclear. The most obvious explanations are that the system studied by Myers has not been precisely replicated here, that the results of Myers could not be read with sufficient accuracy, or that there is an error in the code used to produce the results found by Myers. The first two options seem particularly plausible given the poor quality of the available copy of that manuscript.

Table 4. The extinction probabilities for neutrons of group g injected into the system described in Section 3.2. Note that difficulty in reading the graphs in which the data are presented in Myers (1995) contributes a significant potential error to results in this column.
Energy group   CALLISTO   Myers (1995)
1              0.140      0.03
2              0.175      0.08
3              0.210      0.16
4              0.288      0.23

Table 5. The ratios of the extinction probabilities for a chain following fission caused by a neutron injected in group g of the system described in Section 3.2. The values marked "N/A" are those for which it was not possible to obtain a positive value from Myers (1995).

Table 6. The variables used to define the data for the case described in Section 3.3.

Delayed ramp

This case is a single-energy case designed primarily to demonstrate CALLISTO's ability to calculate the source multiplier. It contains a ramp reactivity insertion such that, as given by Eq. (31), the reactivity ramps over the interval 100 s ≤ t < 500 s and is held at 2 $ for t ≥ 500 s. The full specification of the relevant parameters may be found in Table 6. A plot of the reactivity against time is shown in Fig. 6. The maturity time of the system was calculated to be 378.194 s and the source multiplier of the system for Q = 1 × 10⁻⁸ was calculated to be 2.41636 × 10⁵. For comparison, a separate code was used to solve the same equations that are solved by CALLISTO. This code was the code used to produce the data presented in Williams and Eaton (2017). For this case, that code calculated a maturity time of 379.2 s and a source multiplier of 2.40 × 10⁵. This close agreement helps to verify CALLISTO. The slight differences are likely to be primarily due to small differences in approach. For example, Williams' code takes its input data in a slightly different form (it takes values of ν_fgn as opposed to p_fgm, for example) and this may lead to slight differences in the parameters propagated through to the ODEs, as the data will have gone through different degrees of rounding and truncation.

Sixteen group system

This case is intended to explore the multi-group capabilities of CALLISTO. The physically modelled system is that of an infinite mixture of 20% enriched uranium and hydrogen with an H:U ratio of 400. The cross-sections of this system are modelled using the sixteen group cross-sections from Hansen et al. (1961) whilst the delayed neutron group data is taken from Wilson and England (2002). CALLISTO calculates that this system has a k_inf value of 1.02889, which corresponds to a reactivity of approximately 3.85 $.
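A small helper reproducing the reactivity history of the delayed-ramp case is sketched below. The 100 s and 500 s breakpoints and the 2 $ plateau are taken from Eq. (31) as quoted above; the assumptions that the reactivity is zero before the ramp starts and that the ramp is linear between the breakpoints are inferences from the phrase "ramp reactivity insertion" and from Fig. 6, not explicit formulas from the excerpt.

```python
def reactivity_dollars(t: float) -> float:
    """Reactivity (in dollars) for the delayed-ramp case of Section 3.3.

    Assumed shape: zero before t = 100 s, a linear ramp up to 2 $ at
    t = 500 s, and 2 $ held thereafter.
    """
    if t < 100.0:
        return 0.0
    if t < 500.0:
        return 2.0 * (t - 100.0) / 400.0
    return 2.0

# Quick check of the breakpoints
for t in (0.0, 100.0, 300.0, 500.0, 600.0):
    print(t, reactivity_dollars(t))
```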
Into this system a source is placed which has a decay rate of 1000 disintegrations/s, with each disintegration producing exactly one neutron in the highest energy group. This system may also be collapsed down into a one-group representation using the following equation:

Σ_onegroup = ( ∑_g Σ_g φ_g ) / ( ∑_g φ_g ),   (32)

where Σ_onegroup is either the absorption or fission macroscopic cross-section for the one-group case and Σ_g is the corresponding cross-section for energy group g in the sixteen group cross-section data-set. φ_g is the neutron flux in energy group g in the sixteen group representation of the system, taken from the neutron flux eigenvector of the neutron transport equation solved for the eigenvalue relating to the growth mode associated with the k_inf of the system. This is calculated by CALLISTO when the k_inf of the system is calculated. The equivalent one-group system is thus also used as an input to CALLISTO, and CALLISTO calculates the reactivity of this system to be 3.83 $, which is close to that calculated for the sixteen group system. CALLISTO may also calculate the maturity time for both representations of the system. For the sixteen group case this is calculated to be 1.189663 × 10⁻² s and for the one group system this is calculated to be 1.16218 × 10⁻² s, showing fairly close agreement. The mean and standard deviation of the neutron population as a function of time may also be compared, as in Fig. 7. It can be seen here that the agreement is fairly good, although the growth in both the mean and the standard deviation is larger in the one group case than the sixteen group case, despite the sixteen group case having a slightly higher reactivity, as noted previously. This is likely due to the one group case having a different PDF of the generation time of a neutron, as neutrons do not need to be slowed down before they have a significant chance of causing fission in the one group case. This also explains the slightly shorter maturity time in the one group case. In fact, we see that the mean and standard deviation of the number of neutrons at the respective maturity times are very similar in the sixteen and one group cases (the mean is 8.485 × 10⁵ and the standard deviation is 6.245 × 10⁶ at t = 1.189663 × 10⁻² s for the sixteen group case, and the mean is 8.442 × 10⁵ and the standard deviation is 6.311 × 10⁶ at t = 1.16218 × 10⁻² s for the one group case). Next we may compare the calculated source multiplier for a target probability of Q = 0.1. In the sixteen group case this is 9.39905 × 10⁶ and in the one group case this is 9.53204 × 10⁶. The fact that these values agree fairly closely helps to validate the multi-group treatment within CALLISTO and to increase confidence that correctly collapsing a complex energy group structure to a one group approximation can still produce useful values.

Fig. 6. The reactivity as a function of time of the system described in Section 3.3.

Fig. 7. The mean and standard deviation in the number of neutrons present in the system as calculated for the sixteen group and one group representations of the system described in Section 3.4.

A final comparison that may be made is that of the running time of CALLISTO for these two different representations of the system. These data are presented in Table 7. As can be seen, the one group calculations are significantly faster. This is primarily because the inclusion of even a single fast group dramatically increases the stiffness of the equation set as a whole, significantly slowing the solution of the ODEs.
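The flux-weighted collapse of Eq. (32) can be sketched in a few lines. The sixteen-group flux vector φ_g is whatever eigenvector is returned by the k_inf calculation; the numerical arrays below are placeholders for illustration only, not the Hansen et al. (1961) data.

```python
import numpy as np

def collapse_to_one_group(sigma_g: np.ndarray, phi_g: np.ndarray) -> float:
    """Flux-weighted one-group collapse:
    Sigma_1g = sum_g Sigma_g * phi_g / sum_g phi_g  (cf. Eq. (32))."""
    sigma_g = np.asarray(sigma_g, dtype=float)
    phi_g = np.asarray(phi_g, dtype=float)
    return float(np.dot(sigma_g, phi_g) / phi_g.sum())

# Placeholder sixteen-group data (illustrative only)
rng = np.random.default_rng(0)
phi = rng.random(16)                       # group fluxes from the k_inf eigenvector
sigma_fission = rng.random(16) * 0.1       # group fission cross-sections
sigma_absorption = sigma_fission + rng.random(16) * 0.05

print(collapse_to_one_group(sigma_fission, phi),
      collapse_to_one_group(sigma_absorption, phi))
```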
As a result, a user may wish to utilise a one group representation of a system in order to decrease the solution time, as the loss in accuracy does not appear to be very large in the simplification to a one group system. However, the system chosen here was deliberately chosen to be solvable in a reasonable amount of time, even with all sixteen energy groups represented. Specifically, the specification of the system ensured that the maturity time was very small, which reduces the amount of time that must be simulated. This has, in turn, placed a lower limit on the value of Q which may be used when calculating the source multiplier. It is not practical to repeat this comparison for a system with a longer maturity time as the sixteen group approximation would take a prohibitively long time to solve. This limits the utility of this result, although Section 3.5 attempts to extend the confidence of the conclusions found here over a larger sets of regimes. Two group system Section 3.4 represents a realistic representation of a system whose material properties are derived from a specific material composition in an infinite configuration. However, the inclusion of high-energy groups greatly increases the stiffness of the problem and prohibited simulating the system for a long simulated time. This, in turn, prohibited choosing a low value for Q. In this section a different, less realistic representation of a system without any high-energy groups is presented with the aim of comparing a two group and one group representation of the system regarding the value of the source multiplier obtained. The details of the system being simulated are presented in Table 8. This data is also collapsed into a one-group representation (with the resultant group having an energy of 0.025 eV) using Eq. (32) and the results are compared. Both systems have a reactivity of approximately 0.5$. Using the two group representation the maturity time is calculated to be 59.35 s and the source multiplier is calculated to be 33.40. For the one group case the maturity time was calculated to be 60.37 s and the source multiplier was calculated to be 33.81 with Q ¼ 1  10 À8 . The agreement of these two values provides a further piece of evidence that the collapse of a multigroup system to a single group can provide an acceptably close agreement at a greatly reduced computational cost where the maturity time is larger and the value of Q is small. The running times are recorded in Table 9. Conclusions This paper has presented the CALLISTO code which is based upon the mathematics formulated in another paper (Williams and Eaton, 2017). The code has been applied to a series of five low neutron source verification test cases. In three of these systems the results have been verified or validated against other methods or papers. In two cases the agreement was good, whilst the third case showed significant differences. However, these differences could be at least partially explained and it was demonstrated that CALLISTO appeared to be producing plausible and self-consistent results. In the fourth system it was demonstrated that, for this system, the reduction in the number of energy groups from sixteen to one did not produce large changes in the results produced by CALLISTO in terms of the reactivity, maturity time, neutron population or source multiplier. 
In the fifth system the number of energy groups was reduced from two to one and it was found that this reduction did not significantly change the maturity time or source multiplier. This provides supporting evidence that a single energy group is sufficient to simulate the effects of a low neutron population in these types of calculations. Both the fourth and fifth cases are consistent with the fact that one-group models of fast burst systems such as GODIVA and CALIBAN are found to provide good comparisons with the available experimental data. The work performed here could be extended by the introduction of spatial and angular dependencies for the neutron population (Williams and Eaton, 2018). This would allow for the direct simulation of more complex systems.

Table 7. The time taken to perform different calculations for the two representations of the system described in Section 3.4. These calculations were all performed on the same computer and include overheads such as reading inputs and creating outputs. These overheads should only make a significant contribution to the one group calculations.

Table 9. The time taken to perform different calculations for the two representations of the system described in Section 3.5. These calculations were all performed on the same computer and include overheads such as reading inputs and creating outputs. These overheads should only make a significant contribution to the one group calculations.

To calculate the mean and second moment of the number of neutrons, N̄_S(t, U|s) and ⟨N(N − 1)⟩_S(t, U|s), in energy range U caused

Table B.10. Summary of variables and their meanings (Roman letters). Note that a prime (′) following a variable name refers to the differential of it with respect to z.
Variable — Description — Definition
— Characteristic energy of neutron for group g — Problem-specific
F_fg(s) — Probability of a fission neutron being in group g — Problem-specific
F_dig(s) — Probability of a neutron released by decay of a precursor in precursor group i being in group g — Problem-specific
F_sig(s) — Probability of a neutron produced in a disintegration of the ith source being of energy group g — Problem-specific
G — Number of neutron energy groups — Problem-specific
G_g(z, U, t|s) — Generating function for a neutron beginning at time s in group g, for neutrons in the energy range U at time t — ∑_{n=0}^{∞} z^n P(n, t, U|g, s)
G̃_g(z, U, t|s) — Generating function for a neutron beginning at time s in group g, for neutrons in the energy range U at time t — 1 − G_g(z, U, t|g, s)
G_S(z, U, t|s) — Generating function for the neutrons released by a source between time s and time t, for the neutrons in the energy range U at time t — ∑_{n=0}^{∞} z^n P_S(n, t, U|g, s)
m_n — Mass of a neutron — 1.675 × 10⁻²⁷ kg
— The total number of sources — Problem-specific
p_fgm — Probability of a fission initiated by a neutron of energy group g producing m neutrons — Problem-specific
P(n, t, U|g, s) — Probability of a single neutron injected at time s producing n neutrons in energy range U at time t — Variable
P_S(n, t, U|g, s) — Probability of the neutrons released by a source between time s and time t producing n neutrons in energy range U at time t — Variable
p_sim(s) — Probability of a source disintegration of the ith source producing m neutrons — Problem-specific
Q(n_prob, t, U|s) — Probability that the neutrons released by sources in a time period between time s and time t give rise to less than n_prob neutrons in energy range U — ∑_{n=0}^{n_prob} P_S(n, t, U|s)
S_i(s) — The disintegration rate of the ith source — Problem-specific
t_mat — The maturity time: the time at which the RSD of the number of neutrons in the system, σ_S(t, U|s)/N̄_S(t, U|s), falls below a prescribed value
Σ_ag(s) — Macroscopic absorption cross-section for group g — User defined
Σ_s,g→g'(s) — Macroscopic scattering cross-section from group g to g' — User defined
Σ_sg(s) — Macroscopic scattering cross-section from group g — ∑_{g'=1}^{G} Σ_s,g→g'(t)
ν_fgn(s) — The nth neutron multiplicity (prompt only) of fission caused by neutrons in energy group g — ∑_{m=n}^{m_max,f} m!/(m − n)! p_fgm
ν_sin(s) — The nth neutron multiplicity of the ith source — ∑_{m=n}^{m_max,si} m!/(m − n)! p_sim
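The multiplicity quantities ν_fgn listed above are factorial moments of the fission multiplicity distribution p_fgm. The short check below evaluates that definition for an illustrative multiplicity distribution; the probabilities used are placeholder values, not data from the paper.

```python
from math import factorial

def factorial_moment(p_m: list[float], n: int) -> float:
    """nth factorial moment, sum_m m!/(m-n)! * p_m, of a multiplicity
    distribution p_m indexed by m = number of neutrons released."""
    return sum(factorial(m) / factorial(m - n) * p
               for m, p in enumerate(p_m) if m >= n)

# Illustrative prompt-fission multiplicity distribution (placeholder values)
p_fission = [0.02, 0.10, 0.27, 0.33, 0.20, 0.06, 0.02]

nu1 = factorial_moment(p_fission, 1)  # mean number of neutrons per fission
nu2 = factorial_moment(p_fission, 2)  # second factorial moment
print(f"nu_1 = {nu1:.3f}, nu_2 = {nu2:.3f}")
```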
11,186
2018-03-01T00:00:00.000
[ "Engineering", "Physics" ]
Post-Lie Algebras and Isospectral Flows

In this paper we explore the Lie enveloping algebra of a post-Lie algebra derived from a classical $R$-matrix. An explicit exponential solution of the corresponding Lie bracket flow is presented. It is based on the solution of a post-Lie Magnus-type differential equation.

Introduction

Isospectral flows and the corresponding Lax type equations play an important role in the theory of dynamical systems, both in finite and infinite dimensions. See [1,12]. They appear together with a large supply of conserved quantities for the original dynamical system. In the finite-dimensional case, i.e., for systems with a finite number of degrees of freedom, the Lax representation may correspond to the Hamiltonian representation of the dynamical system in terms of Euler-type equations on the coadjoint orbits of a suitable Lie group G. Writing g* for the dual of the Lie algebra g corresponding to G, recall that if H ∈ C^∞(g*) is a Hamiltonian of the dynamical system then the corresponding Hamiltonian equations, written with respect to the canonical linear Poisson structure {·, ·}_g, take the following form:

α̇ = −ad_{dH_α}(α),   (1.1)

where α ∈ g*. In this description the Casimir functions provide only trivial first integrals with respect to the bracket {·, ·}_g. The existence of a map R ∈ End(g) which satisfies the so-called modified classical Yang-Baxter equation (1.2) [17] allows one to endow g with a second Lie bracket (1.3), and hence a second Poisson structure on g*, with respect to which the Casimir functions generate non-trivial equations of motion (1.4). Furthermore, under the assumption that g is endowed with a non-degenerate, g-invariant bilinear form (·|·), the previous equation can be written as the following Lie bracket flow equation (1.5), where x_α ∈ g is the (unique) element such that (x_α|y) = ⟨α, y⟩ for all y ∈ g. In many interesting cases [1,8,11,19,22], the existence of such an R-matrix turns out to be equivalent to a decomposition of the Lie algebra g = g₊ ⊕ g₋, where g± are two Lie subalgebras of g. To any such decomposition corresponds a (local) decomposition of the corresponding Lie group G ≃ G₊ × G₋, where ≃ in this case means a local diffeomorphism from a neighborhood of the identity e ∈ G to a neighborhood of the identity (e, e) of the (product) Lie group G₊ × G₋. Regarding equation (1.4), or equivalently equation (1.5), the following factorization theorem holds true. See the above references for details and background.

Theorem 1.1 ([17]). Let α₀ ∈ g*, let H be a Casimir function of (g*, {·, ·}_g). Let g±(t) be two smooth curves in G, such that: (a) g±(t) ∈ G± for all t for which they are defined, (b) g±(0) are both equal to the identity e ∈ G, (c) they give a unique solution of the factorization problem (1.6), at least for |t| < ε, with ε > 0. Then the curve t ↦ α(t), α(0) = α₀, constructed from g±(t), solves the Lie bracket flow equation.

The previous theorem connects a certain factorization of elements in the Lie group G with the solution of Lie bracket flow equations in the corresponding (dual, g*, of the) Lie algebra g. The main aim of this work is to explore this result in the framework of the Lie enveloping algebra of a post-Lie algebra defined on g in terms of an R-matrix. A post-Lie algebra [3,10,14,21] consists of a vector space V equipped with two Lie brackets [·, ·] and ⟨·, ·⟩ as well as a non-commutative and non-associative product ▷ : V ⊗ V → V, such that the compatibility identity (1.7) holds. Further below we will state the precise definition and relations that characterize such an algebraic structure.
In light of (1.2) and (1.3), the third product on the Lie algebra (g, [·, ·]) is given in terms of the R-matrix map [2], x y := [ 1 2 (R + id)(x), y], x, y ∈ g. Lifting the post-Lie algebra to the Lie enveloping algebra of the Lie algebra (g, [·, ·]) allows us to define another associative product on U(g), which is compatible with the latter's coalgebra structure [10]. The resulting Hopf algebra is isomorphic -as a Hopf algebra -to the Lie enveloping algebra of the second Lie algebra (ḡ, ·, · ). As a result U(g) is equipped with two natural exponential maps, and the relation between those and the corresponding Lie groups in U(g) is captured through a Magnustype differential equation. This gives rise to explicit solutions of the factorization (1.6) in Theorem 1.1. We close this introduction with two remarks. First we would like to mention that differential geometry is a natural place to look for examples of post-Lie algebras. Indeed, a Koszul connection yields a R-bilinear product on the space of smooth vector fields X (M) on a manifold M. Flatness and constant torsion together with the Bianchi identities imply relation (1.7) between the Jacobi-Lie bracket of vector fields, the torsion itself, and the product defined in terms of the connection [14]. Second, we would like to stress that the formalism introduced in this note is based on the theory of classical R-matrices, and will be applied only in the context of classical dynamical systems, whose descriptions are given in terms of isospectral flows. However, saying this, it is worth mentioning that a similar formalism was used in [18] and [16] to the study of quantum groups and quantum integrable systems, see also Remark 4.12 below. Outline of the paper. Sections 2 and 3 contain several preliminary results, which are crucial for the main statements of this paper, to be found in Sections 4 and 5. More precisely, in Section 2 we quickly recall the basic notions of R-matrices and their relations to factorization problems already mentioned above. Section 3 collects some basic facts about Lie-admissible algebras and post-Lie algebras. In particular, it is shown that solutions of the modified classical Yang-Baxter equation yield a post-Lie algebra structure on the original Lie algebra g. In Section 4, after recalling some important result on the universal enveloping algebra of a post-Lie algebra, we state a factorization theorem for the generators of a group G * sitting inside the universal enveloping algebra of any Lie algebra endowed with a solution of the modified classical Yang-Baxter equation. Furthermore, a distinguished linear isomorphism is defined between the universal enveloping algebra U(g) of a Lie algebra g supporting an R-matrix, and the universal enveloping algebra U(ḡ) of the corresponding double Lie algebra. Moreover, it is shown that by swapping the associative product of U(g) for a new product defined by extending the post-Lie product defined on g in terms of the R-matrix to U(g), such a linear isomorphism becomes a morphism of associative algebras. Finally, in Section 5 the post-Lie algebra structure is invoked to show how the BCH-recursion follows as the solution of a Magnus-type differential equation. This is then applied to Lie bracket flows. It is shown that the solution of a Lie bracket flow on a Lie algebra g endowed with a solution of the modified classical Yang-Baxter equation can be described, under suitable convergence assumptions, in terms of the generators of the group G * . 
In the following the ground field K is of characteristic zero, and K-algebras are assumed to be associative and unital, if not stated otherwise. As mentioned in the introduction, solutions of (2.1) are intimately related to factorizations of the Lie group G, see [11] for details. For simplicity we assume that π + is a projector -which is covering many interesting cases. The subgroups corresponding to the Lie subalgebras g ± are denoted G ± . For a 0 ∈ g and a small enough t, that is, in a sufficiently small neighborhood of the unit of the group G corresponding to g, the following unique factorization holds with g ± (t) ∈ G ± for all t for which they are defined. Define the map and recall that it satisfies the Lie bracket initial value probleṁ Since π ± are projectors we obtain g −1 Anticipating what follows below, we remark that in [6] the function Ω(t; a 0 ) was described in terms of a Magnus-type differential equation, such that g + (t) = exp(Ω(t; a 0 )). In this paper we will describe this Magnus-type differential equation using a post-Lie algebra, and thereby clarify its link to (2.3) by showing that its solution is given in terms of the BCH-recursion [9]. 3 Lie-admissible algebras, post-Lie algebras and R-matrices , not necessarily associative, is called Lie-admissible if the commutator [a, b] := a · b − b · a defines a Lie bracket. In this case, the corresponding Lie algebra (A, [·, ·]) will be denoted by A Lie . Note that associative K-algebras are Lie-admissible. Another class of Lie-admissible algebras is introduced in the following definition. Definition 3.2. The algebra (A, ) with binary product : A ⊗ A → A will be called a (left) pre-Lie algebra, if for all x, y, z ∈ A a (x, y, z) = a (y, x, z), where a (x, y, z) := x (y z) − (x y) z is the associator. Pre-Lie algebras are Lie-admissible. Indeed, note that identity (3.1) can be written as where the linear map x : A → A is defined by x (y) := x y and the bracket on the left-hand side is defined by [x, y] := x y − y x. As a consequence it satisfies the Jacobi identity, turning A into a Lie algebra. See [4,13] for more details. We now turn to the definition of post-Lie algebra following reference [14]. ) be a Lie algebra, and let : g ⊗ g → g be a binary product such that, for all x, y, z ∈ g, Then the triplet (g, [·, ·], ) is called a post-Lie algebra. satisfies the Jacobi identity for all x, y ∈ g. The Lie algebra is written as (ḡ, ·, · ). Remark 3.6. Pre-and post-Lie algebras are important in the theory of numerical methods for differential equations. We refer the reader to [4,7,10,13,14] for background and details. In the introduction we pointed at an archetypal example of post-Lie algebra coming from differential geometry. Here we will present an algebraic example using R-matrices. Let π + ∈ End(g) satisfy identity (2.1), and define the following binary product on g: ). The product (3.3) defines a post-Lie algebra structure on g. It turns out that the new Lie bracket defined in terms of this post-Lie product and the original Lie bracket is the one given in (2.2). Indeed, for π + := id −π − , As for item b), one finds that the Lie-admissible algebra (g, ) is defined through the binary composition x y : 2) is then given by x, y := x y − y x, which is just another way of writing (1.3). WithR : 4 Lie enveloping algebra of a post-Lie algebra In [10] the Lie enveloping algebra of a post-Lie algebra was described. Here we recall the basic results without proofs. 
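For convenience, the axioms referred to in Definition 3.4 can be spelled out. The following is the standard set of post-Lie relations used in the literature cited above (e.g. [14]); sign and ordering conventions may differ slightly from those of the source, so it should be read as a reminder rather than a quotation. For all $x, y, z \in \mathfrak g$,
\[
x \triangleright [y,z] = [x \triangleright y, z] + [y, x \triangleright z],
\qquad
[x,y] \triangleright z = a_\triangleright(x,y,z) - a_\triangleright(y,x,z),
\]
where $a_\triangleright(x,y,z) := x \triangleright (y \triangleright z) - (x \triangleright y)\triangleright z$ is the associator of $\triangleright$. With these relations the bracket
\[
\langle x, y \rangle := x \triangleright y - y \triangleright x + [x,y]
\]
satisfies the Jacobi identity, which is the content of Proposition 3.5.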
Let (g, [·, ·], ) be a post-Lie algebra, and U(g) the universal enveloping algebra of the Lie algebra (g, [·, ·]). Recall that U(g) with concatenation product is a noncommutative, cocommutative filtered Hopf algebra generated by g → U(g). The coshuffle coproduct is defined for x ∈ g by ∆(x) := x ⊗ 1 + 1 ⊗ x, i.e., elements of g are primitive. It is extended multiplicatively to all of U(g). We use Sweedler's notation for the coproduct: ∆(T ) =: , see [20]. The counit is denoted by : U(g) → K, and the antipode S : . Finally, remember that the universal property of U(g) implies that if A is an associative algebra and f : g → A Lie is a homomorphism of Lie algebras, then there exists a unique (unital associative algebra) morphism F : We have seen in Proposition 3.5 that the vector space underlying a post-Lie algebra carries two Lie algebras, (g, [·, ·]) and (ḡ, ·, · ), related via the post-Lie product In what follows, (U(ḡ), ·) will denote the universal enveloping algebra of the Lie algebra (ḡ, ·, · ). In the next proposition the post-Lie product is extended to U(g). Theorem 4.1 ([10] ). There is a unique extension of the post-Lie product from g to U(g). On (U(g), ) the product for A, B ∈ U(g) is associative and unital. Moreover, (U(g), * , ∆) is a Hopf algebra isomorphic to (U(ḡ), ·, ∆). i) The last statement in Theorem 4.1 appeared in the context of pre-Lie algebras in [15], and we refer the reader to [10] for details. For the notation, see item (ii) below. ii) In what follows we will be working with three Hopf algebras. We will consider (U(g), ·, ∆), i.e., the universal enveloping algebra of g, and the Hopf algebra (U(g), * , ∆), both defined on the same underlying vector space, whose products are µ · (x, y) = x·y and µ * (x, y) = x * y, respectively. Note that the coproduct ∆ is the same for both, given by the one originally defined on the universal enveloping algebra (U(g), ·, ∆). According to the last statement in Theorem 4.1 this map is an algebra morphism on (U(g), * ). The third Hopf algebra we will consider is the universal enveloping algebra of the Lie algebra (g, ·, · ). Even though this Hopf algebra should be denoted as (U(g), ·, ∆ · ) we stick to a simplified notation, and denote its product and coproduct by · and ∆, respectively. iii) From now on, suitable completions of the above Hopf algebras will be considered, still denoted by the same symbols U(g) := (U(g), ·, ∆, ), U * (g) := (U(g), * , ∆) and U(g) := (U(g), ·, ∆). Then, for any element v ∈ g, one may consider the elements exp · v, exp · v and exp * v. For example, with v * n denoting the n-fold product v * · · · * v, A simple computation shows that each of these elements is group-like in the corresponding Hopf algebra. For this reason one may consider G, G * and G, the groups generated for v ∈ g by the products of the elements of type exp · v, exp * v and exp · v, respectively. Moreover, to simplify notation, we will write exp v and exp · v to denote the exponentials of v in U(g) and U(ḡ), respectively. iv) In what follows we will often need to use the classical BCH-formula. Recall that BCH : g × g → g is defined such that exp x exp y = exp BCH(x, y), and where x, y ∈ g. The reduced BCH-formula is defined by BCH(x, y) := BCH(x, y) − x − y. From a formal point of view the BCH-formula maps elements from g into the completion of U(g). Without further comments, we will therefore assume that it is convergent. 
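Before continuing, it may help to recall explicitly the extension mentioned in Theorem 4.1. In the conventions of [10] (up to possible sign and ordering differences), the post-Lie product is extended from $\mathfrak g$ to $U(\mathfrak g)$ by requiring, for $x, y_1, \dots, y_n \in \mathfrak g$ and $A, B \in U(\mathfrak g)$,
\[
1 \triangleright A = A, \qquad
x \triangleright (y_1 \cdots y_n) = \sum_{i=1}^{n} y_1 \cdots (x \triangleright y_i) \cdots y_n, \qquad
(xA) \triangleright B = x \triangleright (A \triangleright B) - (x \triangleright A) \triangleright B,
\]
and the associative product of Theorem 4.1 is then the Grossman–Larson-type product
\[
A * B = \sum_{(A)} A_{(1)} \,\big(A_{(2)} \triangleright B\big),
\]
written in Sweedler notation for the coshuffle coproduct. This is the form used below, for instance in the identity $A \triangleright (B \triangleright C) = (A_{(1)}(A_{(2)} \triangleright B)) \triangleright C$.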
However, when working locally we will restrict ourselves to a suitable neighborhood U of the zero element of the Lie algebra g. Beside the exponential maps introduced in (iii), we need the following ordered post-Lie exponential in U(g). Definition 4.3. For any primitive element a ∈ g, the right-ordered exponential exp : From the identity A (B C) = (A (1) (A (2) B)) C, which holds for all A, B, C ∈ U(g), see [10], it follows immediately that in U(g): We now consider these results in the context of a post-Lie algebra which is defined in terms of an R-matrix π + ∈ End(g) satisfying identity (2.1). First recall that any element v ∈ g may be decomposed: v = π − (v) + π + (v) =: v − + v + , and let (g, [·, ·], ) be the corresponding post-Lie algebra with a b := −[π + (a), b]. Then: Proposition 4.4. For every v, w ∈ g the following equality Proof . Note that this result implies that exp * (v) w ∈ g for v, w ∈ g. The next proposition is a natural factorization statement for the generators of the group G * ⊂ U(g), see item (iii) in Remark 4.2. Proposition 4.5. For each v ∈ g the following factorization holds: where v ± := π ± (v), and v v + = 0. Next we show inductively that 1 n! v * n = For n = 2 one finds that: For n > 2 we have Remark 4.6. a) Another way to prove equality (4.2) is to show that both sides solve the same initial value problem. For this we take a local point of view by assuming a small enough t such that exp * (tv) and exp(tv − ) exp(tv + ) both lie in a sufficiently small neighborhood of the unit of the Lie group G corresponding to the Lie algebra g. Then y 1 (t) := exp * (tv) and y 2 (t) := exp(tv − ) exp(tv + ) are solutions oḟ y(t) = y(t) exp(−π + (tv))v exp(π + (tv)) , y(0) = 1. Indeed, From which the statement follows by uniqueness of the solution. b) Returning to item (iv) of Remark 4.2, we find that (4.2) implies In light of Proposition 3.11 in [10], which says that for v ∈ g and sufficiently small enough t exp * (tv) = exp(θ(tv)), , we see that θ(tv) = BCH(tv − , tv + ). If the R-matrix corresponds to a Lie algebra that splits into a direct sum of vector subspaces, g = g + ⊕ g − , then in addition to v v + = 0, we have that v − v − = v + v + = 0. This implies the next result. Corollary 4.7. In the case of an R-matrix π + which is a projector on g, we find that The result in Proposition 4.5 has a Hopf algebraic formulation. Recall that the universal property of U(ḡ) implies that the R-matrices r + := −π + and r − := π − become unital algebra morphism from U(ḡ) to U(g). where µ denotes the product in U(g). Remark 4.9. For X ∈ U(g), using the Sweedler notation, one can write where |X| denotes the degree of a homogenous X ∈ U(g). We can now prove the following result. Proposition 4.10. The map F is an algebra morphism from U(ḡ) to U * (g), i.e., it is a linear map such that: for all monomials x 1 · · · x n ∈ U(ḡ). Proof . First observe that since every x ∈ḡ is a primitive element in U(ḡ), Then note that for every x, y ∈ g one finds: Let X := x 1 · x 2 · · · x n = x 1 · X ∈ U(ḡ), x i ∈ g, and calculate We conclude this section by observing that using the map F of Definition 4.8, one can recover the factorization described in Proposition 4.5. Corollary 4.11. For every v ∈ g, Proof . The result follows from the definition of the map F and from the property of exp · (v) being a group-like element in U(ḡ). Remark 4.12. We compare the map F defined in (4.3) with formula (1.19) in [16]. 
The authors of [16] work with a factorizable r-matrix, i.e., with an element r ∈ g ⊗ g, satisfying the classical Yang-Baxter equation, and having a symmetric part defining a linear isomorphism I : g * → g. The element r permits one to define a Lie algebra struture on the dual vector space g * . Let U(g * ) be the corresponding universal enveloping algebra. Then in [16] it is proven that the map is a linear isomorphism extending I. It is easy to show that the Hopf algebra U(g * ) is isomorphic to U(ḡ), where the R-matrix defining the Lie algebra structure inḡ is obtained from r and I as R := r • I. Here r : g * → g is defined by β, r(α) = α ⊗ β, r , for all α, β ∈ g * . After identifying U(g * ) with U(ḡ) using this isomorphism, the map I becomes the map F , seen as a linear map between U(ḡ) and U(g). Proposition 4.10 states that, at the cost of trading the associative product of U(g) for the product * defined in (4.1), F becomes an isomorphism of associative algebras. It is also worth mentioning that in [18], the (inverse) of the linear isomorphism defined in formula (4.3) was shown to restrict to an algebra homomorphism between the center of U(g) and U(g). This homomorphism was then used in the same paper to define a quantization of the symmetric algebra S(g), i.e., a linear map q : S(g) → U(g), compatible with the filtrations and preserving the so called total symbol of the elements of U(g), see [18, p. 3414]. The quantization of S(g) so obtained was then used to construct the quantum integral of motions for systems with linear Poisson brackets. Regarding the aforementioned remark, we should emphasize that in this note our main interest lies in applying Magnus type formulas combined with post-Lie structures to solve classical Lax type equations, we will refrain from further commenting on possible applications of the formalism here introduced in the context of the quantum dynamical systems. Post-Lie algebra and Lie-bracket f lows In this section the assumptions and notations made explicit in items (ii) and (iii) of Remark 4.2 apply 1 . We start by recalling the so-called BCH-recursion [9]. It is defined for any element x ∈ g through the recursion χ(tx) := tx + BCH(−π + (χ(tx)), tx) ∈ U(g) [[t]]. (5.1) Remark 5.1. Note that χ(tx) is a formal power series whose coefficients are iterated commutators. For this reason it belongs to the vector subspace g[[t]] ⊂ U(g) [[t]], whose elements are formal power series with coefficients in U(g) of degree 1. If g = g + ⊕ g − , and if the corresponding projectors π ± : g → g ± satisfy (2.1), then the above factorization is unique. Recall that the binary product x y := −[π + (x), y] defines a post-Lie algebra structure on g, and is the double Lie bracket (2.2). Remark 5.2. In what follows we will work simultaneously with U(g) [[t]] and U * (g) [[t]], which are the rings of formal power series with coefficients in U(g) and U * (g), respectively. Note that these two rings are endowed with a natural Hopf algebra structure, inherited from U(g) and U * (g). Furthermore, note that the last statement in Theorem 4.1 implies that U * (g) [[t]] is isomorphic, as a Hopf algebra, to U(ḡ) [[t]]. Now we suppose that ξ ∈ U(g). For notational simplicity we assume that ξ = a n , a ∈ g. Conclusions We have addressed the problem of finding explicit solutions to isospectral flow equations. These form a class of differential equations usually encountered in the theory of (Hamiltonian) dynamical systems, and commonly studied using methods borrowed from Lie theory. 
In general, their solutions are obtained starting from the existence of a solution of the modified classical Yang-Baxter equation. In other words, from the existence of a classical R-matrix on the underlying Lie algebra. We have outlined how to approach the problem using a framework based on a particular class of non-associative algebras known as post-Lie algebras. More precisely, starting from a classical R-matrix solving the modified classical Yang-Baxter equation on a (finite-dimensional) Lie algebra g, we showed how the corresponding post-Lie algebra structure on the Lie enveloping algebra U(g) allows us to describe explicit solutions of exponential type to any isospectral flow equation defined on the original Lie algebra g.
5,628
2015-05-10T00:00:00.000
[ "Mathematics" ]
Effects of Bioactive Compound Ginsenoside Rb1 on Burn Wound Healing in Diabetic Rats: Influencing the M1 to M2 Phenotypic Transition

Panax notoginseng (P. notoginseng) has been used traditionally to treat traumatic injuries. Ginsenoside Rb1, a key active ingredient derived from Panax notoginseng, has received a lot of interest due to its anti-inflammatory, bacteriostatic, and growth-promoting effects on cells. The therapeutic benefits of ginsenoside Rb1 on burn wounds in STZ-induced diabetic rats, as well as the probable underlying processes, were investigated in this work. The skin wound healing effect of ginsenoside Rb1 (0.25% and 0.5% w/w) in a rat model of burn wounds in diabetic rats was observed at various time points after treatment. On days 5 and 19 following treatment, immunohistochemistry and Western blot analysis for IL-1β, TNF-α, CD68 and CD163 of biological tissues were done. Macroscopic observation was used to track the healing of skin wounds at various periods. The protein expression of CD68 and CD163, which serve as M1 and M2 macrophage markers, was examined in detail. More notably, the ability of ginsenoside Rb1 to alter inflammatory markers (IL-6) and anti-inflammatory markers (IL-10), and its influence on hydroxyproline and hexosamine, was observed. As indicated by increased CD163 (M2) and reduced CD68 (M1) on day 5, ginsenoside Rb1 effectively flips the M1 to M2 phenotypic transition at the right time to improve burn wound healing in diabetic rats. Ginsenoside Rb1 (0.5% w/w) treatment showed higher tensile strength, anti-inflammatory properties, antioxidant properties, and increased tissue hexosamine and hydroxyproline levels. Skin tissue morphology was significantly improved following 19 days of ginsenoside Rb1 (0.5% w/w) therapy, according to hematoxylin-eosin and Masson's trichrome staining. Furthermore, ginsenoside Rb1 (0.5% w/w) favoured the inflammatory phase of burn wound healing (IL-6), assisted the proliferation process (IL-10) and had considerably lower expression of IL-1β and TNF-α at the later stage of wound healing. Overall, the data showed that ginsenoside Rb1 (0.5% w/w) accelerates burn wound healing in diabetic rats through a mechanism that may be linked to the M1 to M2 phenotypic shift.

Introduction

The healing of a burn wound is a multi-step process that begins with inflammation and terminates with epithelialization.[1] Diabetes may impact the length of a burn patient's hospital stay.[2] Moreover, hyperglycemia is linked to a higher risk of overall morbidity in burn patients.[3] Furthermore, as diabetic people cannot distinguish between hot and warm due to a loss of sensation in their lower extremities, they are more likely to get foot burns from electric heating pads, water baths, and foot spas.[5] Burn damage treatment is considered an unmet clinical need, with no satisfactory solution available to date.[5] Because diabetes is a global epidemic, healthcare practitioners will face more challenges in treating diabetic burn victims.[6] The usage of Ginsenoside Rb1 in a diabetic thermal wound rat model was examined in this work.
In treating second-degree burns, silver sulfadiazine (SSD) is routinely used for the prevention of burn wound infections and helps to reduce symptoms.[7] However, considering the side effects of these medications (antibacterial activity-related side effects, cytotoxicity, and so on), [8]the prognosis of some patients remains bleak.As a result, new local topical medications for treating wounds and scalds are urgently needed, with proven therapeutic e cacy and fewer adverse effects than currently available treatments.Panax notoginseng is a traditional herbal medication used to treat in ammatory diseases, cardiovascular illnesses, traumatic injuries, and external and internal bleeding caused by damage.[9]Because of its potential anti-in ammatory, antioxidant, and cell growth-promoting activities, ginsenoside Rb1, an essential active ingredient of P. notoginseng, has gotten a lot of attention.[10][11][12][13]But, the role of Ginsenoside Rb1has not been investigated in diabetic burn wounds.Thus, the effects of ginsenoside Rb1(0.25% and 0.5% w/w) onhealing of burn wounds in diabetic rats and the processes behind these bene ts were investigated in this work to contribute to a scienti c foundation for the therapeutic use of ginsenoside Rb1 to treat burns in diabetic rats. Ointment processing Ginsenoside Rb1 (purity > 99.0%) was procured fromSelleckChem.A basic ointment base in the ratio of 1:6:3 will be prepared using liquid para n, propyleneglycol, and glycol stearate, respectively.[14]Adequate levels of test substance will be added to the base ointment for preparing two doses of test ointments-Ginsenoside Rb1 (High Dosage (0.5% w/w) and low dose (0.25% w/w).The ointment base will be applied topically to the vehicle group.For positive control, 1% silver sulphadiazine ointment will be used as a reference drug.During the treatment period, 0.5 g of the test ointments or reference drug will be administered to the wound sites topically once a day. Experimental Animals And Housing Conditions: Wistar albino rats of both sexes weighing 150-250 g were used to test burn wound healing capabilities.The animals were housed in poly-propylene cages with optimum humidity, light, and temperature [Temp: 25 ± 2ºC, 75% relative humidity, andlight/dark cycles (12/12 h)].Theanimals were fed a commercial pellet diet for rats and adequate water for at least one week before testing.The Institutional Animal Ethics Committee (IAEC) of Erode College of Pharmacy, Erode, Tamilnadu, India (565/02/CA/18/CPCSEA) approved all experimental procedures.Experiments were carried out as per the guidelines for laboratory animal care and use. Induction Of Diabetes: Animals were kept starving overnight.Nicotinamide (HiMedia Labs Pvt.Ltd.) 
was given at a 110 mg/kg body weight dose 15 minutes before streptozotocin (STZ) was given. A 65 mg/kg dose of STZ (Sigma, USA) solution dissolved in a citrate buffer with a pH of 4.5 was given intraperitoneally (i.p.). Further, to minimize hypoglycemia caused by increased pancreatic insulin secretion, a 10% glucose solution was given to the rats for an additional 24 hours following STZ treatment. Blood was drawn from the rats' tail veins 72 hours after they received STZ injections. Diabetic rats were defined as those with a fasting blood glucose level of > 200 mg/dl and were included in this investigation.[15]

Wound Healing Activity

Thermal burn wound model: The rats were split into five groups of six rats for the thermal wound model. The first group is non-diabetic (normal control), and the second group is diabetic (diabetic control) receiving simple ointment. The third received silver sulfadiazine (1% w/w). The fourth and fifth groups received ginsenoside Rb1 (0.25% w/w and 0.5% w/w), respectively. The third set was utilized to assess wound closure and to do further biochemical testing. The dorsal skin hairs were carefully removed one day before the burn. For 24 hours, the animals were monitored to check if shaving had caused any irritation. A metal rod of 2.5 cm diameter was heated to a temperature of 80-85°C and, for 20 s, it was pressed on the dorsal skin of the rats to create thermal burn injuries. When the animals had recovered from the anaesthetic, the wound was dressed with clean, sterile gauze, and they were maintained separately. The burn was treated daily with the drugs. The wound closure rate was recorded using transparent paper and a permanent marker on the 5th, 10th, 14th, 17th and 19th post-wounding days.[16] The percentage of wound closure was calculated using the method below for the final analysis of the data.[17][18]

% Wound closure = [(Day 0 wound area − Day "n" wound area) / Day 0 wound area] × 100,

where n = 5th, 10th, 14th, 17th and 19th post-wounding day.

Biochemical Analyses: At the end of the test, the rats with burn wounds were sacrificed to analyze the healing process in terms of the biochemical characteristics. The burn wound area of the experimental rats was excised to assess tissue hydroxyproline and hexosamine.

Estimation Of Hydroxyproline And Hexosamine

Hydroxyproline, the most crucial indicator of collagen turnover, was evaluated in the burn wound granulation tissue. Tissues were dried at 60-70°C in a hot air oven to a consistent weight, then hydrolyzed in 6 N HCl in a sealed tube for 4 hours at 130°C. After neutralization to pH 7.0, the hydrolysate was subjected for 20 minutes to chloramine-T oxidation before being halted by the addition of 0.4 M perchloric acid. The colour was developed at 60°C using Ehrlich's reagent and detected with a UV-Vis spectrophotometer at 557 nm (Shimadzu, Columbia, MD).[19]

The weighed granulation tissues were subjected to hydrolysis for 8 hours at 98°C in 6 N HCl, neutralization was done with 4 N NaOH at pH 7, and the hydrolysate was further diluted with distilled water for the measurement of hexosamine. After mixing with acetylacetone solution for 40 minutes, the diluted solution was heated to 96°C. Ethanol (96%) was added after cooling the mixture, and then a solution of p-dimethylaminobenzaldehyde (Ehrlich's reagent) was added. After the solution had been well mixed and allowed to cool for 1 hour, the absorbance was measured at 530 nm using a Shimadzu double beam UV-Vis spectrophotometer. The amount of hexosamine was determined using a standard curve.
Hexosamine concentration was determined in milligrams per gram of dry tissue weight.[19] Estimation Of Antioxidant Activity On day 8, blood was taken from the retro-orbital plexus of burn wound animal models and centrifuged for 10 minutes at 506.11 rpm (Microcentrifuge) to separate plasma to test antioxidant activity.The serum was used to perform the antioxidative enzyme test.To assess the degree of lipid peroxidation (LPO), thiobarbituric acid reactive substances quantity was tested using the Uchiyama and Mihara procedure The Sedlak and Lindsay methodology was used to assess the reduced glutathione (GSH)levels, whereas the Kono method evaluated the superoxide dismutase (SOD)activities.Aebi's standard procedure was used to assay catalase (CAT).[19] Estimation Of Pro-in ammatory And Anti-in ammatory Cytokine Induction: On days 2 and 10, the blood samples were obtained from burn wound animals in each group following wounding.The amounts of pro-in ammatory (IL-6) and anti-in ammatory (IL-10) cytokines were assessed using commercially available enzyme-linked immunosorbent assays (ELISAs).The tests were carried out according to the instructions of the manufacturer.The concentrations of cytokine were determined in pg/ml by drawing the curve for the standard.Each experiment was done three times to ensure that the results were correct.[20][21] Histopathology, Immunohistochemistry and Western blot analysis for IL-1β TNF-α, CD68 and CD163. The wound-healing tissue was removed and then xed in buffered formalin.Later, it is processed in a series of alcohol and xylene and embedded in para n blocks on the 5th and 19th days.The repair effect was assessed by examining the stained sections using an optical microscope using hematoxylin-eosin (H&E) and Masson's trichrome staining. 18The sections were treated with primary antibodies to IL-1β and TNF-α for immunohistochemistry.The manufacturer's protocol was followed for all phases of immunohistochemical staining.A uorescence microscope was used to examine the immunohistochemistry samples. Wound skin samples were fully homogenized in the presence of lysis buffer (PBS, pH 7.4) before being centrifuged at 10,000 rpm for 10 minutes.The proteins that were prepared were electrophoresed on 10% sodium dodecyl sulfate (SDS)-polyacrylamide gels.After that, the proteins were moved to PVDF Western blot membranes for 2 hours at 40 V, primary anti-bodies were overnight incubated at 4°C.After that, the membrane would be incubated for 1 hour at 22°C with the HRP-conjugated anti secondary antibody.The membrane was then examined using an X-ray lms and an improved chemiluminescent reagent. 
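Returning briefly to the wound-contraction measurements described in the thermal burn wound model above, the percentage-closure formula can be scripted directly. The area values below are arbitrary example numbers, not measurements from the study; the day-0 area is simply that of a 2.5 cm diameter circle for illustration.

```python
def wound_closure_percent(area_day0: float, area_day_n: float) -> float:
    """% wound closure = [(day-0 area - day-n area) / day-0 area] x 100."""
    return (area_day0 - area_day_n) / area_day0 * 100.0

# Example areas in cm^2 (illustrative only)
area_day0 = 4.9  # approx. area of a 2.5 cm diameter burn on day 0
areas = {5: 3.9, 10: 2.8, 14: 1.6, 17: 0.7, 19: 0.1}
for day, area in areas.items():
    print(f"day {day}: {wound_closure_percent(area_day0, area):.1f}% closure")
```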
Western blot analysis to examine CD68 and CD163: A 30 µg protein sample electrophoresed on a 10% SDS-PAGE gel was used to examine CD68 and CD163. The proteins were transferred onto a polyvinylidene fluoride (PVDF) membrane, which was then blocked for 1 hour at room temperature using a blocking solution (5% skim milk powder in TBST). After that, primary antibodies were used to probe the membranes (Abcam, Cambridge, UK, 1:1000). The next day, the membranes were washed and incubated with secondary antibodies (horseradish peroxidase-conjugated anti-mouse or anti-rabbit at 1:2000). Chemiluminescence substrate was used to develop the proteins, and the ChemiDoc XRS Plus system was used to scan them (Bio-Rad). The findings were expressed in standard units (Biotech Inc). β-actin was employed as the housekeeping gene.[21] Statistical analysis: Dunnett's test was used to analyse differences between means after the data were submitted to analysis of variance (ANOVA). A difference was considered significant at a threshold of P < 0.05. Results are represented as the mean ± standard error of the mean (SEM) of six animals (n = 6). Result Burn wound healing: In thermally produced burn wounds, ginsenoside Rb1 showed a significant (P < 0.05) improvement in wound contraction percentage compared to the control group. On most post-wounding days, the ginsenoside Rb1 ointment (0.5% w/w) rats had a more prominent and significant (P < 0.05) proportion of wound contraction than the normal control rats (Table 1). On the 19th day of the study, the ginsenoside Rb1 ointment (0.5% w/w) and silver sulfadiazine treated groups had the highest percentages of wound closure, with 99.23 ± 3.41 and 71.36 ± 3.21, respectively (Table 1). Consequently, ginsenoside Rb1 ointment (0.5% w/w) was shown to be the most effective treatment. Hexosamine and hydroxyproline concentrations in the healed wound tissue are shown in Table 2. The content of hydroxyproline and hexosamine in the diabetic wound control was significantly (P < 0.05) reduced. The treatment groups had significantly (P < 0.01) higher hydroxyproline and hexosamine concentrations than the diabetic wound controls. On the other hand, on the 10th day following wounding, the diabetic control rats had a high IL-6 level (97.4 ± 15.1 pg/ml). The level of IL-6 in the silver sulfadiazine treated group (101.0 ± 14.4 pg/ml) on the 2nd day after wounding was substantially (P < 0.05) lower than in the animals of the diabetic control group, and the drop in IL-6 levels was maintained at 70.5 ± 11.5 pg/ml on the 10th day after wounding. Effect of Ginsenoside Rb1 on Protein Expression of CD68 and CD163 in Experimental Rats: CD68 and CD163 protein expression in the wound skin region of the burn wounds was detected using Western blotting (Table 4). When comparing Group II (diabetic control rats) to Group I (normal control rats), there was an up-regulation of both CD68 and CD163 levels after the therapy. In diabetic rats treated with ginsenoside Rb1 (0.5% w/w), however, there was a substantial (P < 0.05) downregulation of CD68 and an increase in CD163 levels. All treatment groups had higher levels of CD163 by day 5, with ginsenoside Rb1 (0.5% w/w) having the most considerable rise. Histopathologic Report: Biopsies of burn wounds from the ginsenoside Rb1 (0.5% w/w) treated group demonstrated virtually repaired skin architecture with normal epithelization, fibrosis within the dermis and restitution of the adnexa (Fig. 1), compared to the reference standard silver sulfadiazine (1% w/w). Masson's trichrome staining could be used to assess the quantity of new collagen deposition (Fig.
2). The ginsenoside Rb1 (0.5% w/w) group showed mature and well-developed collagen depositions. Finally, the findings revealed that wounds treated with ginsenoside Rb1 (0.5% w/w) showed minor inflammation, practically complete re-epithelialization, and well-organized collagen deposition. All groups had varying degrees of inflammation on the 19th day after treatment, and the control group had significant expression of IL-1β and TNF-α, as shown in Figure 3. The SSD and ginsenoside Rb1 (0.25%, 0.5% w/w) groups had lower expression than the control group; however, the ginsenoside Rb1 (0.5% w/w) group had considerably lower expression of IL-1β and TNF-α on the 19th day post-treatment. Discussion Even in physically healthy people, burn injuries are frequently accompanied by complications. People with diabetes may have a higher risk of complications and death. Wound healing failures and infections are also common in people with diabetes.[22] This is because numerous components of wound-healing physiology are disrupted in diabetic wounds, resulting in delayed wound healing and persistent inflammation. Fewer endothelial progenitor cells, lower endothelium-derived nitric oxide synthase activity, a growth factor shortage, reduced macrophage activity, reduced collagen deposition, increased ECM proteolysis, and an impaired switch from the M1 to the M2 phenotype are all examples of such molecular imbalances.[23] This transition from M1 to M2 is critical for resolving inflammation and shifting the balance toward tissue repair.[24] Ginsenoside Rb1 (0.5% w/w) has been reported to have a potent healing effect on burn wounds through several mechanisms, including enhanced vascularization in the surrounding tissue and production of interleukin 1 beta (IL-1β) and vascular endothelial growth factor (VEGF) from the burn wound. The stimulation of VEGF synthesis, increases in the expression of hypoxia-inducible factor (HIF)-1 in keratinocytes, and an increase in IL-1 owing to macrophage buildup in the burn site all contribute to angiogenesis. Also, by promoting bio-active substances (histamine, SP, and MCP-1), ginsenoside Rb1 (0.5% w/w) facilitates burn wound healing.[25] Ginsenosides have also been reported to promote wound healing by activating the mitogen-activated protein kinase pathway, stimulating intracellular cAMP levels and associated protein expression in the nucleus, and enhancing dermal fibroblast proliferation and collagen synthesis. Furthermore, ginsenoside Rb1 enhances skin keratinocyte movement and myofibroblast transformation in senescent dermal fibroblasts of human skin by stimulating the production of growth factors, including a sequence of SASP factors.[26] In addition to the above mechanisms, the M1 to M2 transition is crucial, as it shifts the wound from the inflammatory phase to tissue healing.
Wound healing is a complicated biological process divided into four stages: the haemostasis phase (0 to several hours after damage), the inflammation phase (1-3 days), the proliferation phase (4-21 days), and the remodelling phase (21 days-1 year).[27] Interruption of any of these stages leads to poor healing, such as chronically difficult-to-heal ulcers or extensive scarring, which has a significant and rising health and cost burden on our society.[27][28][29] The transition from the inflammatory stage to the regenerative stage of wound healing is vital, and evidence is growing that a faulty transition is associated with wound healing difficulties. As a result, therapeutic developments focussing on this shift could be justified.[18] The inflammatory phase is necessary to protect from infection and remove dead tissue, as it brings haemostasis and activates the innate immune system.[30] On the other hand, if the inflammation is prolonged, it may interfere with keratinocyte differentiation and activation and obstruct wound healing from progressing through the usual stages.[28] Furthermore, persistent inflammation in chronic inflammatory situations, such as diabetic wounds, is expected to raise metalloproteinases and other proteases, which degrade ECM components and growth factors essential for healing.[23] Furthermore, extensive scarring has been associated with persistent inflammation.[31] During wound healing, macrophages switch from a pro-inflammatory M1 phenotype to a tissue-repair M2 phenotype. This produces anti-inflammatory mediators like decoy IL-1 receptor type II, IL-10, and IL-1R antagonist, as well as bioactive molecules like VEGF, IGF1 and TGF that promote ECM synthesis, fibroblast proliferation, and angiogenesis.[32,33] The transition from M1 to M2 is crucial for resolving inflammation and shifting the balance toward tissue healing.[24] In both animal and human wounds, continuous IL-1β (a pro-inflammatory cytokine) blocked the upregulation of peroxisome proliferator-activated receptor (PPAR)-γ activity, which is essential for macrophage phenotypic transformation. As a result, it was discovered that diabetes induces a faulty M1-M2 transition, which delays wound healing.[34] Therefore, regulation of the above pathways is required for optimal wound healing.
[36] The levels of hydroxyproline and hexosamine in the tissue were examined since they are directly related to collagen production and extracellular matrix development, respectively.[37] When ginsenoside Rb1 (0.5% w/w) treated burn wounds were compared to untreated diabetic control rats, significantly higher levels of hydroxyproline and hexosamine (Table 2) were found (P < 0.05). Ginsenoside Rb1 activates macrophages, releasing cytokines and growth factors with antibacterial and anti-inflammatory properties and promoting migration of dermal fibroblasts to the lesion. In the wound, these fibroblasts multiply, creating extracellular matrix (ECM) biomaterials such as collagen to start the healing process.[38][39][40] The pro-inflammatory mediator IL-6 (Table 3) was noticed as soon as 12-24 hours after cutaneous damage, and such mediators promote angiogenesis, which is essential in the inflammatory stage of wound healing.[41] More intriguingly, the outcomes of this investigation revealed that ginsenoside Rb1 (0.5% w/w) did not affect IL-6 levels in the day-two samples (Table 3). This shows that during the early phases of recovery, ginsenoside Rb1 (0.5% w/w) did not affect pro-inflammatory cytokines produced by macrophages. On the other hand, ginsenoside Rb1 (0.5% w/w) therapy increased IL-10 levels on day ten following burn injury. It is worth mentioning that IL-10 is a cytokine generated by T cells and macrophages with anti-inflammatory characteristics.[42] The wound-healing environment appears to be altered by IL-10, which seems to reduce the expression of profibrotic/pro-inflammatory mediators, leading to a reduction in inflammatory cell recruitment to the wound.[42,43] Treatment with ginsenoside Rb1 (0.5% w/w) increased serum IL-10 levels while decreasing IL-6 expression, especially on day ten after burn injury. As a result, ginsenoside Rb1 regulates pro-inflammatory and anti-inflammatory cytokines and the systemic immunological pathways that relate them to cellular proliferation. Biochemical analysis of plasma samples was performed to determine the role of anti-oxidants and pro-inflammatory and anti-inflammatory mediators behind the beneficial effect of ginsenoside Rb1. In our research, ginsenoside Rb1 (0.5% w/w) showed extraordinary antioxidant activity by substantially (P < 0.05) boosting the levels of antioxidant enzymes like SOD, CAT, and glutathione (GSH), suggesting that ginsenoside Rb1 could aid in the prevention of oxidative damage and the improvement of the healing process (Table 2). SOD-1 catalyzes the dismutation of superoxide radicals into dioxygen and hydrogen peroxide (H2O2), which are both potentially hazardous. The CAT activity of the ginsenoside Rb1 (0.5% w/w) treated group was much higher, suggesting that elevated CAT may effectively neutralize the H2O2 accumulated due to enhanced SOD activity.[44][45][46] GSH is also a critical endogenous thiol antioxidant that acts as a supporting factor for glutathione peroxidase (GPx) in removing lipid hydroperoxides.[46] Furthermore, when reactive oxygen species destroy polyunsaturated lipids, MDA, a secondary metabolite of LPO, is utilized to determine the level of oxidative damage in an organism.
Conclusion The current work demonstrates the therapeutic potential of ginsenoside Rb1 (0.5% w/w) for treating diabetic burn wounds, as it ingeniously alters the transition from the M1 to the M2 phenotype at the right time to improve diabetic burn wound healing. On day 5, there was an increase in CD163 (M2) and a reduction in CD68 (M1). Furthermore, ginsenoside Rb1 (0.5% w/w) increased tissue hydroxyproline and hexosamine levels, which improved collagen production and extracellular matrix formation in diabetic burn wounds. Similarly, not interfering with the generation of pro-inflammatory mediators (IL-6) favoured the inflammatory phase of wound healing. It also aided the proliferation process by enhancing anti-inflammatory mediator synthesis (IL-10). Overall, our data point to the therapeutic potential of ginsenoside Rb1 (0.5% w/w) as a stand-alone therapy or in conjunction with other standard burn care medicines for the successful treatment of diabetic burn wounds. Additional study is needed, however, to corroborate the current findings. Abbreviations CD68 and CD163 are glycoproteins and markers of wound healing macrophages. This transition from the M1 to the M2 phenotype is crucial in diabetic wounds, and the findings highlight the mechanism behind the enhanced wound healing of ginsenoside Rb1 (0.5% w/w) in diabetic animals with burn wounds. Furthermore, the reduced concentrations of TNF-α and IL-1β on day 5 in the ginsenoside Rb1 (0.5% w/w) group (Figure 3) further support the transition from the M1 to the M2 phenotype. The low TNF-α and IL-1β levels are sustained throughout the healing period in the ginsenoside Rb1 groups (Figure 3). Table 1 Effect of ginsenoside Rb1 on thermal burn wound healing in diabetic rats. Table 2 Biochemical study of wound tissue in diabetic rats caused by streptozotocin; the results are shown as mean ± S.E.M. of six rats (n = 6), *P < 0.05 is the statistical difference from control. Table 4 Effect of ginsenoside Rb1 on CD68 and CD163 in diabetic burn wounds in rats.
5,273.8
2023-05-10T00:00:00.000
[ "Biology" ]
A Variable Control Chart under the Truncated Life Test for a Weibull Distribution In this manuscript, a variable control chart under the time truncated life test for the Weibull distribution is presented. The procedure of the proposed control chart is given and its run length properties are derived for the shifted process. The control limit is determined by considering the target in-control average run length (ARL). Tables of ARLs are presented for industrial use according to various shift parameters and shape parameters of the Weibull distribution. A simulation study is given to demonstrate the performance of the proposed control chart. Introduction A control chart is considered a powerful tool for maintaining the high quality of products in industry. This tool is used to monitor the industrial process from raw material to final product. The process can shift from the target value due to several extraneous factors. The control chart should provide a quick indication of a shift in a manufacturing process, if there is any. This quick indication helps the industrial engineer to look at the problem and bring the process back into the in-control state. Two types of control charts are widely used in industry for monitoring the manufacturing process. Attribute control charts are used when the data come from a counting process, and variable control charts are used when the data are obtained from a measurement process. Attribute control charts are easy to apply, but variable control charts are more informative than attribute charts. Usually, variable control charts are developed under the assumption that the characteristic follows the normal distribution. In practice, a control chart designed for the normal distribution may not be effective for monitoring the manufacturing process when the characteristic of interest does not follow the normal distribution. The use of this type of control chart may lead to a wrong decision about the status of the manufacturing process. Derya and Canan [1] mentioned that "the distributions of measurements in chemical processes, semiconductor processes, cutting tool wear processes and observations on lifetimes in accelerated life test samples are often skewed". Several authors focused on this issue and designed control charts for non-normal data. Al-Oraini and Rahim [2] designed a control chart for a gamma distribution. Amin et al. [3] designed a non-parametric chart using a sign statistic. Chang and Bai [4] worked on a control chart for a positively skewed population, and Chen and Yeh [5] worked on an economic-statistical design of a control chart for a gamma distribution. McCracken and Chakraborti [6] presented a control chart for monitoring the mean and variance of a normal distribution. Yen et al. [7] designed a synthetic-type control chart for time between events. Riaz et al. [8] proposed a median control chart, and Abujiya et al. [9] worked on the cumulative sum (CUSUM) control chart. Gonzalez and Viles [10] designed an R chart using the gamma distribution, and Lio and Park [11] designed a control chart for inverse Gaussian percentiles. Huang and Pascual [12] designed a control chart for Weibull percentiles using the order statistic. Derya and Canan [1] designed control charts for the Weibull, gamma, and log-normal distributions, and [13] proposed a control chart for the Burr type X distribution. Aslam et al. [14] designed a control chart for the exponential distribution using the exponentially weighted moving average (EWMA) statistic.
For the reliability evaluation of a product, the time truncated life test is often adopted in industry to save experiment time. Therefore, the design of a control chart under the time truncated test is important when monitoring the process in terms of reliability. Aslam et al. [15] proposed an attribute control chart under the time truncated life test by assuming that the failure time of a product follows the Weibull distribution. Aslam et al. [16] designed a time truncated attribute control chart using the Pareto distribution. Aslam et al. [17] designed a time truncated control chart for the Birnbaum-Saunders distribution under repetitive sampling. More details about such control charts can be found in Aslam et al. [18], Arif et al. [19], Khan et al. [20], and Shafqat et al. [21]. In summary, control charts under the time truncated life test for various situations or distributions are available for an attribute quality of interest. However, these control charts cannot be applied to the monitoring of a measurable quality of interest. From exploring the literature and to the best of our knowledge, there is no work on the design of a variable control chart under the time truncated life test. In this paper, we propose a variable control chart for a Weibull distribution using failure data from a time truncated life test. A real example is given for illustration purposes, and a simulation study is added to demonstrate the performance of the proposed control chart. Proposed Chart and Its Average Run Length (ARL) The assumptions of the proposed control chart are stated as follows: 1. It is assumed that the quality characteristic of interest follows a Weibull distribution. 2. The shape parameter β of the Weibull distribution is assumed to be known. 3. When the process is shifted, the scale parameter changes to λ1 = cλ0, where c is a shift constant and λ0 is the scale parameter of the in-control process. The shape parameter remains unchanged when the process is shifted. 4. Failure times are measured during the time truncated life test. Let the variable of interest follow the Weibull distribution with the cumulative distribution function (cdf) F(x) = 1 - exp[-(x/λ)^β], x ≥ 0, where β is the shape parameter and λ is the scale parameter. The average lifetime, µ, of the variable is given by µ = λ Γ(1 + 1/β), where Γ(.) is the gamma function. The proposed control chart is operated as follows: Step 1 Take a sample of size n at each subgroup from the production process. Put the items on test until the specified time t0. Step 2 Obtain the time to failure of item i (denoted by Xi), and set Xi = t0 if item i does not fail by time t0. The charting statistic computed from these (censored) failure times is then compared against the control limit L3. Distribution of Control Statistics and In-Control Average Run Length (ARL) To derive the necessary measures of the average run length (ARL), it is convenient to select the specified time t0 = aµ0 as a fraction of the mean of the in-control process, where a is a constant and µ0 is the target mean.
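The in-control scale parameter implied by a target mean follows directly from the mean formula above. The short sketch below, written under the parameterization implied by the transformation Y = X^β used in the next subsection, computes λ0 and the truncation time t0 for the values used later in the simulation study.

```python
from math import gamma

def weibull_scale_from_mean(mu0, beta):
    """Scale parameter lambda0 of a Weibull(shape=beta) with target mean mu0,
    using mu = lambda * Gamma(1 + 1/beta)."""
    return mu0 / gamma(1.0 + 1.0 / beta)

beta, mu0, a = 1.5, 50.0, 1.0           # shape, target mean, truncation fraction
lam0 = weibull_scale_from_mean(mu0, beta)
t0 = a * mu0                             # censoring time of the truncated life test
print(f"lambda0 = {lam0:.2f}, t0 = {t0:.1f}, E[Y] = lambda0**beta = {lam0**beta:.1f}")
```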
As the transformed variable Y = X^β is modeled by an exponential distribution with mean λ^β, the sum of the Y's (or Ȳ) follows a gamma distribution. However, the distribution of Ȳ approximately follows a normal distribution according to the central limit theorem, as long as the sample size is sufficient. Then, the expected value of Ȳ can be obtained by: Here, E[Yi] is obtained by: Equation (4) reduces to: The variance of Ȳ is given as follows: The simplified form of Var[Ȳ] can be rewritten as: Let P_in,0 be the probability of being declared in control when the process is actually in control at λ0. Then, it is given as follows: Finally, P_in,0 can be written as follows: The efficiency of the proposed chart will be measured through the ARL, which shows when the process will be declared out of control. Let ARL0 be the ARL for the in-control process. Then, it is given by ARL0 = 1/(1 - P_in,0). Out of Control ARL Let us assume that the process has shifted due to some factors to a new scale parameter λ1 = cλ0, where c is the shift constant. Now, we derive the measures for the shifted process. Let P_in,1 be the probability of declaring the state of being in control when the process has shifted to λ1, which is given as follows: The distribution of Ȳ at λ1 approximately follows a normal distribution with the mean and the variance given below: Therefore, the in-control probability for the shifted process, say P_in,1 in Equation (11), is obtained by: Hence, the out-of-control ARL, say ARL1, for the shifted process is given by ARL1 = 1/(1 - P_in,1). The values of ARL1 for various values of the shape parameter, µ0, c and r0 are presented in Tables 1-8. Let r0 be the specified in-control ARL. The control limit L3 is determined by using the following algorithm: 1. Prefix the value of r0. 2. The value L3 is obtained such that ARL0 ≥ r0. From Tables 1-8 and Figure 1, we note the following trends in ARL1. 1. As a increases, ARL1 decreases for the same shift in the process. This seems reasonable because the number of failures observed increases as a increases. 2. For a fixed a, as c decreases, ARL1 also decreases because the true mean time to failure decreases. 3. With the other parameters fixed, as µ0 increases from 50 to 100, the behavior of ARL1 remains quite similar, which is a desirable feature. 4. With the other parameters fixed, as β increases from 0.5 to 2, ARL1 decreases. Simulation Study In this section, we discuss the performance of the proposed control chart using simulated data. The data are generated from the Weibull distribution with shape parameter β = 1.5 and target mean µ0 = 50. The first 20 observations are generated from an in-control process and the next 20 observations are generated from a shifted process with c = 0.8. We applied these data to the proposed control chart with a = 1 and r0 = 370. The control limit is obtained as L3 = 173.68. The values of Yi = Xi^1.5 are calculated and plotted on the control chart. From Figure 2, we note that the proposed chart detects the shift in the process at the 31st sample.
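A rough re-creation of this simulation is sketched below. The subgroup size and the signalling rule (the subgroup statistic falling below L3 after a downward shift) are assumptions made only for illustration, since they are not fully spelled out above; the shape parameter, target mean, shift constant, truncation fraction, and control limit are the values quoted in the text.

```python
import numpy as np
from math import gamma

rng = np.random.default_rng(1)

beta, mu0, a, c = 1.5, 50.0, 1.0, 0.8   # shape, target mean, truncation fraction, shift constant
L3 = 173.68                              # control limit quoted above for r0 = 370
n = 20                                   # subgroup size (assumed for illustration; not stated above)
lam0 = mu0 / gamma(1.0 + 1.0 / beta)     # in-control Weibull scale parameter
t0 = a * mu0                             # truncation (censoring) time of the life test

def subgroup_statistic(lam):
    """Mean of Y_i = X_i**beta with failure times censored at t0."""
    x = np.minimum(rng.weibull(beta, size=n) * lam, t0)
    return float(np.mean(x ** beta))

# 20 in-control subgroups followed by 20 subgroups after a downward shift (lambda1 = c*lambda0).
stats = [subgroup_statistic(lam0) for _ in range(20)] + \
        [subgroup_statistic(c * lam0) for _ in range(20)]

for i, ybar in enumerate(stats, start=1):
    flag = "OUT" if ybar < L3 else "in"  # assumed signalling rule: statistic falls below L3
    print(f"subgroup {i:2d}: Ybar = {ybar:7.1f} ({flag})")
```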
Real Example In this section, we discuss the application of the proposed control chart using the breaking stress data of carbon fibers from an industry. Carbon fiber has good tensile strength, which is measured in force per unit area (GPa). For more details, see Yu et al. [22]. The carbon fibers data given in GPa are modeled by the Weibull distribution when β = 1. For this study, the design values used are 30 and r0 = 370. The values of the statistic Ȳ for these data are shown below: 404, 377, 420, 698, 402, 421, 352, 303, 544, 413, 365, 359, 383, 330, 451, 402, 355, 368, 342, 383, 444, 368, 364, 433, 443, 508, 447, 528, 512, 559, 481, 325, 303, 410, 628, 373, 571, 414, 364, 328. By plotting Ȳ on the control chart, Figure 3 shows that the process is in control, but some points are close to the lower control limit. Conclusions In this paper, a new variable chart under the truncated life test is designed for the Weibull distribution. The average run length is derived to measure the efficiency of the proposed chart. Extensive tables are given for industrial use. The application of the proposed chart is given with simulated data and a real example. Figure 2. The proposed chart using simulated data. Figure 3. Proposed chart for the carbon fibers data.
2,811.2
2018-06-10T00:00:00.000
[ "Engineering", "Mathematics" ]
Uncertainty in river discharge observations: a quantitative analysis This study proposes a framework for analysing and quantifying the uncertainty of river flow data. Such uncertainty is often considered to be negligible with respect to other approximations affecting hydrological studies. Actually, given that river discharge data are usually obtained by means of the so-called rating curve method, a number of different sources of error affect the derived observations. These include: errors in measurements of river stage and discharge utilised to parameterise the rating curve, interpolation and extrapolation error of the rating curve, presence of unsteady flow conditions, and seasonal variations of the state of the vegetation (i.e. roughness). This study aims at analysing these sources of uncertainty using an original methodology. The novelty of the proposed framework lies in the estimation of rating curve uncertainty, which is based on hydraulic simulations. These latter are carried out on a reach of the Po River (Italy) by means of a one-dimensional (1-D) hydraulic model code (HEC-RAS). The results of the study show that errors in river flow data are indeed far from negligible. Introduction In recent years, there has been an increasing interest in assessing uncertainty in hydrology and analysing its possible effects on hydrological modelling (Montanari and Brath, 2004; Montanari and Grossi, 2008). Uncertainty has been recognised to be important in the communication with end users (Beven, 2006; Montanari, 2007) and to play a key role in the context of prediction in ungauged basins (PUB). Furthermore, uncertainty assessment is one of the key tasks of the PUB initiative launched in 2003 by the International Association of Hydrological Sciences (Sivapalan et al., 2003). Indeed, hydrologists are well aware that a significant approximation affects the output of hydrological models. Uncertainty is caused by many sources of error that propagate through the model, therefore affecting its output. Three main sources of uncertainty have been identified by hydrologists (e.g. Goetzinger and Bardossy, 2008): (a) uncertainty in observations, which is the approximation in the observed hydrological variables used as input or calibration/validation data (e.g. rainfall, temperature and river discharge); (b) parameter uncertainty, which is induced by imperfect calibration of hydrological models; (c) model structural uncertainty, which originates from the inability of hydrological models to perfectly schematise the physical processes involved in the rainfall-runoff transformation. Among these, observation uncertainty is often believed to play a marginal role, given that it is often considered negligible with respect to (b) and (c). Hence, only a few attempts have been made to quantify the effects of the observation uncertainty on hydrological and hydraulic modelling (e.g. Clarke, 1999; Pappenberger et al., 2006). Nevertheless, the estimation of the uncertainty in observations with which the model is compared should be the starting point in model evaluation. For instance, the methodology recently proposed by Liu et al. (2009) to assess model performance by using limits of acceptability (Beven, 2006) is based on the assessment of observation uncertainty. Already 20 years ago, Pelletier (1987) reviewed 140 publications dealing with uncertainty in the determination of the river discharge, thereby providing an extensive summary.
Pelletier (1987) referred to the case in which river discharge is measured by using the velocity-area method, which is based on the integration of the flow velocities v(x, t) measured over the cross-section area. The resulting uncertainty is mainly caused by errors in v(x, t), which are due to imprecision of the current meter, variability of the river flow velocity over the cross section and uncertainty in the estimation of the cross-section geometry. Pelletier (1987) highlighted that the overall uncertainty in a single determination of river discharge, at the 95% confidence level, can vary in the range 8%-20%, mainly depending on the exposure time of the current meter, the number of sampling points where the velocity is measured and the value of v(x, t). Other contributions reported errors around 5-6% (Leonard et al., 2000; Shmidt, 2002). In addition, the European ISO EN Rule 748 (1997) describes a methodology to quantify the expected errors of the velocity-area method. It is important to note that, in operational practice, river discharge observations are usually obtained by means of the so-called rating curve method (e.g. World Meteorological Organisation, 1994). According to this technique, measurements of river stage are converted into river discharge by means of a function (rating curve), which is preliminarily estimated by using a set of stage and flow measurements. Hence, an additional error is induced by the imperfect estimation of the rating curve. In this paper, the river discharge estimated through the rating curve method is denoted by the symbol Q(x, t). This study aims at proposing a framework for assessing the global uncertainty affecting Q(x, t), which obviously depends on the specific test site considered. In particular, approaches described by previous studies (e.g. Herschy, 1970, 1975; European ISO EN Rule 748, 1997) are applied to estimate the uncertainty of Q (x,t) (velocity-area method), while an original methodology is developed to analyse additional sources of error in the river discharge observation, Q(x, t), related to the uncertain estimation of the rating curve. Uncertainty in river discharge observations A full comprehension of the uncertainty that affects the rating curve method for discharge measurement requires a description of the procedure itself. In order to estimate the rating curve, field campaigns are carried out to record contemporaneous measurements of river stage h(x,t) and river discharge Q (x,t), evaluated by using the velocity-area method. These measurements allow the identification of a number of points (Q (x,t); h(x, t)) that are then interpolated by using an analytical relationship as the rating curve. Once the rating curve is estimated, the observed river discharge Q(x,t) at an arbitrary time t can be operationally obtained by measuring the river stage h(x, t). A function widely used as a rating curve in river hydraulics (characterised by some physical justifications) is the power function (e.g. Dymond and Christian, 1982; Herschy, 1978; Pappenberger et al., 2006): Q(x, t) = c1 [h(x, t) + c2]^c3, (2) where c1, c2 and c3 are calibration parameters, usually estimated by means of the least squares method (e.g. Petersen-Øverleir, 2004). Polynomial functions can also be used as rating curves (e.g. Yu, 2000). Obviously, in order to estimate a reliable rating curve, the reduction of the uncertainty of the measurements Q (x,t) is required. The European ISO EN Rule 748 (1997) provides guidelines to this end by establishing an international standard for Europe. Accordingly, the measurement of Q (x,t) should be carried out as follows.
First of all, one should measure the river flow velocity along a number of vertical segments lying on the cross section. When the cross section width exceeds 10 m, v(x, t) should be measured along at least 20 verticals, which should be placed so that the river discharge in each subsection is less than 5% of the total; the number and spacing of the velocity measurements along each vertical should be selected so that the difference in readings between two adjacent points is no more than 20% of the higher value. Once the velocity readings along each vertical are integrated over depth, the area of the obtained velocity curve gives the discharge per unit width along that vertical. The average of two subsequent area values gives the discharge per unit width in the subsection encompassed by the two verticals. Finally, the river discharge Q (x,t) is obtained by integrating the discharges in each subsection. A simple model for the error structure of the rating curve method In order to infer the error affecting river flow observations derived by the rating curve method, a model for the error structure is to be introduced. Given that the available information is often limited in practical cases, a simple model is proposed herein. The model aims at taking into account the main sources of uncertainty within a simplified approach. In this study, the uncertainty induced by imperfect observation of the river stage is neglected. This is consistent with the fact that these errors are usually very small (around 1-2 cm; e.g. Shmidt, 2002; Pappenberger et al., 2006) and therefore of the same order of magnitude as standard topographic errors. Moreover, the geometry of the river is assumed to be stationary, which means that the rating curve changes in time only because of seasonal variation of roughness (see below). This assumption has been made because the uncertainty induced by possible variations of the river geometry is heavily dependent on the considered case study and no general rule can be suggested. However, it is worth noting that, under this assumption, the study neglects one of the most relevant sources of uncertainty that may affect river discharge observations where relevant sediment transport and erosion processes are present. In view of the assumptions made, the following main sources of error affecting Q(x, t) can be identified: 1) error ε1(Q(x, t)) in the measurement Q (x,t) obtained with the velocity-area method; 2) error ε2(Q(x, t)) due to rating curve uncertainty, which in turn is induced by 2.1) the interpolation and extrapolation error, ε2.1(Q(x, t)), of the rating curve; 2.2) the presence of unsteady flow conditions, ε2.2(Q(x, t)); 2.3) seasonal changes of roughness, ε2.3(Q(x, t)). According to operational experience, ε1(Q(x, t)) and ε2(Q(x, t)) are independent. This study assumes that the global uncertainty, ε(Q(x, t)), affecting Q(x, t) can be obtained by ε(Q(x, t)) = ε1(Q(x, t)) + ε2(Q(x, t)), (4) where ε1(Q(x, t)) is assumed to be a Gaussian random variable (e.g. European ISO EN Rule 748, 1997) while ε2(Q(x, t)) is precautionarily assumed to be a binary random variable (see Sect. 2.3 below for more details) inferred by means of numerical simulations. Traditional approaches are used in this study to infer ε1(Q(x, t)), while original techniques are developed to evaluate the rating curve uncertainty ε2(Q(x, t)). The latter is a difficult task, as the methodology depends on the available information.
As a general framework, the study proposes the estimation of ε2(Q(x, t)) using a flood propagation model, under a set of simplifying assumptions. Some of these assumptions can be easily removed in practical applications, depending on the scope of the analysis and the available information. The proposed procedures for estimating ε1(Q(x, t)) and ε2(Q(x, t)) are described below. Uncertainty in river discharge measurements The uncertainty affecting the Q (x,t) measurements derived by the velocity-area method is mainly due to: the river flow during the measurement may be unsteady; the presence of wind may affect the reliability of the velocity measurement; the velocity measurement by the current meter may be imprecise even in ideal conditions; the measurement of the width, B, of the cross section and the water depth, h_i, along each i-th vertical segment may be affected by errors; the spatial variability of the flow velocity may induce estimation errors for the area of the velocity curve along the vertical segments and the mean velocity per unit width. This latter error is strictly related to the number of vertical segments. In order to quantify the uncertainty affecting Q (x,t), one needs to quantify the individual sources of error. The European ISO EN Rule 748 (1997) provides indications about the magnitude of these errors, at the 95% confidence level: the uncertainty X_e affecting the measurement of the local flow velocity is about ±6%, when the velocity itself is about 0.5 m/s and the exposure time is 2 min; the uncertainty X_c affecting the rating of the rotating element of the current meter is about ±1%, when the flow velocity is about 0.5 m/s; the uncertainty X_B affecting the measurement of B is about ±1%; the uncertainty X_d affecting the measurement of h_i is about ±1%; the uncertainty X_p in the estimation of the mean velocity along each vertical segment is about ±5% when at least 5 point measurements are collected; the uncertainty X_A in the estimation of the mean velocity over the cross section is about ±5% when the number of vertical segments, m, is about 20. The uncertainty affecting Q (x,t) can be obtained by integrating the individual sources of uncertainty above (Herschy, 1970, 1975; European ISO EN Rule 748, 1997), under the assumptions that: i) the current meter is operated in ideal conditions, without any systematic uncertainty and in the absence of significant wind and unsteady flow; ii) the errors are independent and normally distributed; and iii) the number of vertical segments is at least 20, with an even distribution of discharge along the river cross subsections. The resulting combined uncertainty affecting Q (x,t) at the 95% confidence level is given by Eq. (5). Thus, it can be concluded that any river discharge measurement that is used to calibrate a rating curve is affected by an uncertainty of about 5% of Q (x,t) at the 95% confidence level. This outcome matches the indications reported in Leonard et al. (2000) and Shmidt (2002). It follows that ε1(Q(x, t)) is a Gaussian random variable with zero mean and standard deviation equal to 0.027 Q(x,t). Rating curve uncertainty This study assumes that in operational practice no information is available to infer the sign of the errors ε2.1(Q(x, t)), ε2.2(Q(x, t)) and ε2.3(Q(x, t)). In fact, even though one could infer the sign of the error induced by unsteady flow and roughness changes, the necessary information is often not available.
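Since Eq. (5) itself is not reproduced above, the sketch below only illustrates one common root-sum-of-squares way of combining the listed component uncertainties, in which the per-vertical terms are reduced by the number of verticals m. It is an approximation written for illustration, not the exact ISO 748 expression, although it reproduces the roughly 5% figure quoted in the text.

```python
from math import sqrt

# Component uncertainties at the 95% confidence level (percent), as listed above.
X_e, X_c, X_p = 6.0, 1.0, 5.0   # point velocity, current-meter rating, vertical averaging
X_B, X_d, X_A = 1.0, 1.0, 5.0   # width, depth, cross-section (number of verticals) term
m = 20                          # number of vertical segments

# Root-sum-of-squares combination with the per-vertical terms divided by m.
# This is an illustrative stand-in for Eq. (5), not the exact ISO 748 formula.
X_Q = sqrt(X_B**2 + X_d**2 + X_A**2 + (X_e**2 + X_c**2 + X_p**2) / m)
print(f"combined uncertainty of a velocity-area measurement: about {X_Q:.1f}%")
```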
Moreover, it is unlikely to introduce any reliable assumption about the sign of the errors induced by interpolation/extrapolation. The worst situation is obtained when the signs are in agreement; in fact, if the errors have opposite signs there is error compensation. Therefore, in order to follow a conservative approach, these errors are assumed to have an absolute additive structure, so that the absolute error affecting Q(x,t), which is induced by rating curve uncertainty, |ε2(Q(x, t))|, can be obtained by |ε2(Q(x, t))| = |ε2.1(Q(x, t))| + |ε2.2(Q(x, t))| + |ε2.3(Q(x, t))|. (6) This allows one to deterministically obtain a safe estimate of the absolute error induced by rating curve uncertainty via numerical simulation (see below). However, given that no information is available in operational practice to infer the error sign, ε2(Q(x, t)) is assumed to be a binary random variable which can assume the values +|ε2(Q(x, t))| and −|ε2(Q(x, t))| with equal probability. As mentioned above, in order to quantify |ε2(Q(x, t))|, numerical experiments were performed using the 1-D model code HEC-RAS (Hydrologic Engineering Center, 2001). HEC-RAS solves the 1-D differential equations for unsteady open channel flow (De Saint Venant equations), using the finite difference method and a four-point implicit method (box scheme; Preismann, 1961). HEC-RAS is widely used for hydraulic modelling (e.g. Pappenberger et al., 2006; Young et al., 2009; Di Baldassarre et al., 2009) and a number of studies have shown that HEC-RAS is often suitable for providing a reliable reproduction of the flood propagation in natural rivers and streams (e.g. Horritt and Bates, 2002; Castellarin et al., 2009). The numerical study focused on a 330 km reach of the Po River from Isola Sant'Antonio to Pontelagoscuro (see Fig. 1). The Po River is the longest river in Italy (the total length is about 652 km) and it drains a large part of northern Italy, with a contributing area at the closure section of about 70 000 km². The geometry of the river reach was described by 275 cross sections surveyed in 2005. Figure 2 shows the elevation of the river bed and the levee system. The main geometric characteristics of the reach are summarised in Table 1. In particular, the Manning roughness coefficient was allowed to vary between 0.01 and 0.06 m−1/3 s for the main channel and between 0.05 and 0.15 m−1/3 s for the floodplain. Several simulations of the 2000 flood event were carried out by using: the flow hydrograph observed at Isola S. Antonio as upstream boundary condition (Fig. 3), the flow hydrographs recorded in the major tributaries as lateral inflow and the stage hydrograph observed at Pontelagoscuro as downstream boundary condition. To check the model reliability, the water stages observed in two internal cross sections (Casalmaggiore and Boretto, Fig. 2) were compared to the simulated ones. The best performance was obtained by using Manning's values equal to 0.03 m−1/3 s for the main channel and 0.09 m−1/3 s for the floodplain. These values agree with what is recommended by the literature. In particular, Chow et al. (1988) suggest for this type of river Manning coefficients around 0.03-0.04 m−1/3 s for the main channel and around 0.08-0.12 m−1/3 s for the floodplain. Figure 4 shows the simulated and observed stage hydrographs in the two internal cross sections. By analysing Fig. 4, one can observe that the model provides a satisfactory reproduction of the hydraulic behaviour of the reach under study, although it does not capture irregularities on the rising limb.
These irregularities are mainly due to the presence of some two-dimensional (2-D) features, such as failures of minor levees, which cannot be represented using a 1-D model. In order to inspect the uncertainty induced by an imperfect estimation of the rating curve, the study focused on 17 cross sections placed near the internal cross section of Boretto. For each of them, the 1-D model was used to estimate the steady flow rating curve for river discharges ranging from 1000 to 12 000 m3/s. It is relevant to note that in the river reach under study there is in practice a one-to-one correspondence between the water stage and the river discharge in steady flow conditions, in view of the negligible role played by the downstream disturbances and boundary condition. Uncertainty induced by interpolating and extrapolating the rating curve The interpolation and extrapolation error |ε2.1(Q(x, t))| was estimated as follows. For each cross section, a total of 11 (Q (x,t); h(x,t)) points corresponding to river discharge values in the range 1000-6000 m3/s, in steps of 500 m3/s, were obtained through steady flow simulations. Then, rating curves were estimated using the two Eqs. (2) and (3) to interpolate these (Q (x,t); h(x,t)) points. This methodology reflects the fact that rating curves are usually derived by using river discharge measurements related to ordinary flow conditions (for obvious practical reasons) and then extrapolated to estimate river discharge for high flow conditions also. Specifically, in the river reach under study, river discharges in the range 1000-6000 m3/s correspond to ordinary flow conditions (from low flow values to ordinary floods), while river discharges in the range 6500-12 000 m3/s correspond to exceptional flow conditions (from about 1-in-5 to 1-in-100 year floods; e.g. Maione et al., 2003). Finally, for each cross section, errors were computed by comparing the steady flow rating curve to the estimated one both in the range 1000-6000 m3/s (interpolation error) and in the range 6500-12 000 m3/s (extrapolation error) (e.g. Fig. 5). The error analysis pointed out that the polynomial function (3) performs slightly better than the power function (2). Specifically, using the polynomial function (3) as the rating curve and assuming that the percentage errors with respect to Q(x,t) are Gaussian, the average |ε2.1(Q(x, t))| along the river reach was found to be equal to 1.2% and 11.5% of Q(x,t), at the 95% confidence level, for the interpolation and extrapolation error, respectively (whereas, using the power function (2) as the rating curve, the average |ε2.1(Q(x, t))| along the river reach was found to be equal to 1.7% and 13.8% of Q(x,t)). Table 2 reports the percentage values of this source of uncertainty for each considered Q(x,t) value. By analysing Table 2 one can observe that, as expected, errors increase for increasing river discharge. Uncertainty induced by the presence of unsteady flow conditions It is well known that in unsteady flow conditions there is not a one-to-one relationship between the river stage and the river discharge (e.g. Dottori et al., 2009). Actually, during a flood the same river stage corresponds to different river discharges in the two limbs of the hydrograph, the higher one occurring in the rising limb. In order to assess the magnitude of the error that can be induced by the presence of unsteady flow, the model was used to simulate the 2000 flood event and estimate the unsteady flow rating curve (Fig. 6).
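The curve-fitting step described above can be illustrated with a short sketch that calibrates a power-law rating curve of the Eq. (2) form on synthetic (stage, discharge) pairs in the ordinary-flow range and then extrapolates it. All numerical values below are hypothetical and only demonstrate the procedure, not the Po River data.

```python
import numpy as np
from scipy.optimize import curve_fit

def power_rating(h, c1, c2, c3):
    """Power-law rating curve of the Eq. (2) form: Q = c1 * (h + c2)**c3."""
    return c1 * (h + c2) ** c3

# Hypothetical paired stage (m) and discharge (m3/s) values in the ordinary-flow range.
h_obs = np.array([1.58, 2.26, 2.87, 3.42, 3.93, 4.40, 4.86, 5.29, 5.71, 6.11, 6.50])
q_obs = np.array([1000, 1500, 2000, 2500, 3000, 3500, 4000, 4500, 5000, 5500, 6000])

popt, _ = curve_fit(power_rating, h_obs, q_obs, p0=[200.0, 1.0, 1.5],
                    bounds=([1e-3, -1.0, 0.5], [1e4, 5.0, 3.0]))

h_flood = 9.0  # hypothetical flood stage beyond the calibration range
print("fitted c1, c2, c3:", popt)
print(f"extrapolated discharge at h = {h_flood} m: {power_rating(h_flood, *popt):.0f} m3/s")
```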
Then, for each cross section, river discharge values simulated by the model were compared to the corresponding values estimated by using the steady flow rating curve (Fig. 6). For each value of Q(x,t) in the range 1000-12 000 m3/s, in steps of 500 m3/s, and each cross section, the largest absolute errors were taken in order to obtain a one-to-one relationship between |ε2.2(Q(x, t))| and Q(x,t). Table 2 lists the average values, expressed as a percentage of Q(x,t), of the three single sources of rating curve uncertainty (|ε2.1|, |ε2.2|, |ε2.3|) for the considered discrete values of the river discharge, together with the upper and lower 95% confidence band for Q(x,t), averaged over the river reach, and the average value of ε*(Q(x, t)) expressed as a percentage of Q(x,t); note that ε1(Q(x,t)) is uniformly equal to 4.4% of the observed discharge at the 95% confidence level. By assuming that the percentage errors (with respect to Q(x,t)) |ε2.2(Q(x, t))| are Gaussian, the average |ε2.2(Q(x, t))| along the river reach was found to be equal to 9.8% of Q(x,t), at the 95% confidence level. Table 2 reports the percentage values of this source of uncertainty for each considered Q(x,t) value. By analysing Table 2 one can observe that errors are particularly high for intermediate river discharge values. Uncertainty induced by seasonal changes of the river roughness Floodplain roughness depends on the state of the vegetation, which is affected by seasonal variations. This causes changes in the rating curve and therefore may affect the river discharge estimation (Franchini et al., 1999). The Po River is characterised by floodplains largely abandoned or covered by broad-leaved woods. Figure 7 shows two rating curves for one cross section along the Po River calculated by the 1-D model. They refer to values of the Manning floodplain coefficient equal to 0.09 m−1/3 s and 0.12 m−1/3 s. The former is the calibrated value, which refers to October (when the 2000 flood event occurred). The latter is a value that might be representative of spring conditions, according to Chow (1988). For each value of Q(x,t) in the range 1000-12 000 m3/s, in steps of 500 m3/s, and each cross section, the error ε2.3(Q(x, t)) was computed. By assuming that the percentage errors (with respect to Q(x,t)) ε2.3(Q(x, t)) are Gaussian, the average of |ε2.3(Q(x, t))| was found to be equal to 4.9% of Q(x,t), at the 95% confidence level. Table 2 reports the percentage values of this source of uncertainty for each considered Q(x,t) value. By analysing Table 2 one can observe that, as expected, this source of error increases for increasing river discharge. Computation of the total rating curve uncertainty The total rating curve uncertainty was evaluated by summing up, through Eq. (6), the errors induced by: 1) interpolation and extrapolation of river discharge measurements; 2) presence of unsteady flow; 3) seasonal variation of roughness. Figure 8 reports the progress of |ε2(Q(x, t))| along the river reach for different values of Q(x,t). Figure 8 clearly shows that errors increase when the river discharge increases. In percentage terms, |ε2(Q(x, t))| varies from 1.8% to 38.4% of Q(x,t), with a mean value of 21.2% and a standard deviation of 10.8%. Computation of the global uncertainty Under the aforementioned assumption of independence of ε1(Q(x, t)) and ε2(Q(x, t)), the global error affecting Q(x,t), ε(Q(x, t)), at the 95% confidence level, can be computed according to Eq. (4).
It has to be taken into account that ε1(Q(x, t)) is a Gaussian random variable with zero mean and standard deviation equal to 0.027 Q(x,t) (see Sect. 2.2), while ε2(Q(x, t)) is a binary random variable taking the values +|ε2(Q(x, t))| and −|ε2(Q(x, t))| with equal probability. Its absolute value was computed above and is visualised in Fig. 8 for discrete values of x and Q. Therefore, the 95% confidence bands of an assigned Q(x,t) value can be computed with the relationship Q(x, t) ± ε*(Q(x, t)), where ε*(Q(x, t)) is the width of the 95% upper (and lower) confidence band and α is the 0.95 quantile of the standard normal distribution (equal to 1.645) used in its computation. Table 2 shows the average value, along the river reach, of the upper and lower confidence band for the considered discrete values of the river discharge, along with the average value of ε*(Q(x, t)), expressed as a percentage of Q(x,t). By analysing Table 2 one can observe that, in the Po River reach under study, the estimation of river discharge using the rating curve method is affected by an increasing error for increasing river discharge values. At the 95% confidence level the error ranges from 6.2% to 42.8% of Q(x,t), with an average value of 25.6%. Discussion The error model used above to compute ε(Q(x, t)) was derived by introducing a series of assumptions. The most important ones are summarised here below: 1. the uncertainty induced by imperfect measurement of the river stage is negligible; 2. the geometry of the river cross sections is stationary in time; 3. ε(Q(x, t)) can be obtained by adding ε1(Q(x, t)) and ε2(Q(x, t)), which are independent; 4. the uncertainties affecting Q (x,t) are independent and systematic errors are excluded; 5. ε1(Q(x, t)) is a Gaussian random variable; 6. ε2(Q(x, t)) is a binary random variable which can assume the values +|ε2(Q(x, t))| and −|ε2(Q(x, t))| with equal probability. It can be computed according to an absolute additive error model (Eq. 4). Assumptions 3) and 6) are conservative and may lead to an overestimation of the uncertainty. In order to better inspect this issue, Table 2 reports the amounts of |ε2.1|, |ε2.2| and |ε2.3| averaged over the river reach, expressed as a percentage of Q(x,t). Given that ε1(Q(x, t)) is equal to 5.3% at the 95% confidence level, one can see that it is negligible with respect to ε2(Q(x, t)) and therefore the simplifying assumption 3) has little influence on the results. The numerical analysis showed that the uncertainty induced by the extrapolation of the rating curve dominates the other errors in high flow conditions, therefore making assumption 6) of little influence as well. In fact, previous contributions in hydrology (e.g. Rantz et al., 1982) recommend not extrapolating rating curves beyond a certain range. Nevertheless, several hydrological applications are unavoidably based on flood flow observations (e.g. calibration and validation of rainfall-runoff models, flood frequency analysis, boundary conditions of flood inundation models) and therefore one needs to extrapolate the rating curve beyond the measurement range (Pappenberger et al., 2006). Given that the river reach under study is characterised by a very gentle slope (Table 1), the uncertainty induced by the presence of unsteady flow is also relevant in this test site (Table 2). Nevertheless, it is important to note that this latter source of error can be reduced by applying formulas proposed by the scientific literature to approximate unsteady flow rating curves (e.g.
Dottori et al., 2009). Finally, errors in the river flow measurements used to construct the rating curve and errors due to seasonal changes of roughness are not as significant. Conclusions Hydrological models often disregard the fact that river flow data are affected by a significant uncertainty. One of the main reasons is that modellers are often not able to quantitatively assess the reliability of rainfall or river discharge observations. This paper proposed a methodology to quantify the uncertainty that one may expect when river discharge observations are derived by applying the rating curve method. The methodology was applied to a reach of the Po River (Italy) by means of a 1-D hydraulic model. The overall error affecting river discharge observations averaged over the river reach under study was found to range from 6.2% to 42.8%, at the 95% confidence level, with an average value of 25.6%. Hence, errors in river discharge observations are significant and can heavily impact the output of hydrological and hydraulic studies. The results of the study are unavoidably associated with the considered test site. Nevertheless, it is important to note that the conditions of the Po River can be considered representative for many alluvial rivers in Europe. Also, the framework proposed in this paper can be easily applied to different river reaches.
6,669.6
2009-06-25T00:00:00.000
[ "Engineering", "Environmental Science" ]
No need for extensive artifact rejection for ICA A multi-study evaluation on stationary and mobile EEG datasets Objective. Electroencephalography (EEG) studies increasingly make use of more ecologically valid experimental protocols involving mobile participants who actively engage with their environment, leading to increased artifacts in the recorded data (MoBI; Gramann et al., 2011). When analyzing EEG data, especially in the mobile context, removing samples regarded as artifactual is a common approach before computing independent component analysis (ICA). Automatic tools for this exist, such as the automatic sample rejection of the AMICA algorithm (Palmer et al., 2011), but the impact of both movement intensity and the automatic sample rejection has not been systematically evaluated yet. Approach. We computed AMICA decompositions on eight datasets from six open-access studies with varying degrees of movement intensity using increasingly conservative sample rejection criteria. We evaluated the subsequent decomposition quality in terms of the component mutual information, the amount of brain, muscle, and "other" components, the residual variance of the brain components, and an exemplary signal-to-noise ratio. Main results. We found that increasing movements of participants led to decreasing decomposition quality for individual datasets but not as a general trend across all movement intensities. The cleaning strength had less impact on decomposition results than anticipated, and moderate cleaning of the data resulted in the best decompositions. Significance. Our results indicate that the AMICA algorithm is very robust even with limited data cleaning. Moderate amounts of cleaning, such as 5 to 10 iterations of the AMICA sample rejection with 3 standard deviations as the threshold, will likely improve the decomposition of most datasets, irrespective of the movement intensity. Introduction Removing artifacts from electrophysiological data in the time domain can be a task as time-consuming as it is important. Rejecting periods of "bad" data that should not be taken into account for further downstream analysis has become a staple in electroencephalography (EEG) analysis from the outset of the method. This includes the rejection of bad epochs when computing event-related measures, but also the rejection of bad samples before running an independent component analysis (ICA; Bell & Sejnowski, 1995; Hyvärinen et al., 2001). ICA decomposes the acquired sensor data into components that can subsequently be interpreted regarding the underlying physiological processes (e.g. brain, eyes, muscle, other), and as a common preprocessing step before running ICA, bad samples are removed from the data. Similar to applying a high-pass filter before ICA and copying the decomposition results back to unfiltered data (e.g. Winkler et al., 2015), the ICA results computed on a dataset that had bad time points removed can be applied to the complete uncleaned data in the end (e.g. Gramann et al., 2021; Jacobsen et al., 2021) to retain as much data as possible for downstream analyses. Although time-domain cleaning is regularly used, to our knowledge no study has yet investigated the effect of time-domain cleaning on ICA decomposition in depth.
The present study addresses this issue and systematically investigates the effect of time-domain cleaning on the resultant ICA decomposition while taking different experimental protocols into account that increase in mobility from stationary to Mobile Brain/Body Imaging (MoBI; Gramann et al., 2011, 2014; Makeig et al., 2009) setups. Cleaning of data from mobile experiments is a complex problem While data cleaning in the time domain is important for stationary, seated experiments, it is even more relevant for experiments collecting data from mobile participants. These mobile EEG (Debener et al., 2012) and MoBI (Jungnickel et al., 2019) studies are gaining popularity. Especially with analytical options to remove non-brain activity from high-density EEG data, this approach allows for imaging the human brain in its natural habitat - in participants moving in and actively engaging with their environment. This increased mobility naturally comes with increased non-brain activity, traditionally considered artifacts, contributing to the recorded signal. On the one hand, more physiological activity stemming from the eyes and muscles will be present; on the other hand, electrical and mechanical artifacts stemming from additional devices, cable sway, or electrode shifts on the scalp will be more prevalent in mobile EEG data (Gramann et al., 2011; Gwin et al., 2010; Jungnickel & Gramann, 2016). In traditional seated experimental protocols, all these contributions to the recorded signal would be candidates for removal. However, as eye and muscle activity can be found throughout the recordings in stationary as well as mobile EEG data, and they typically can be removed with ICA, it is not always clear or easy to decide which time points to remove during time-domain cleaning. Considering mechanical artifacts, large transient spikes from electrode shifts can be detected comparably easily, but cable sway, for example, although potentially high in amplitude, is not a clear case for removal since it might be present throughout the entire experiment and/or especially during periods that are of interest in the experimental paradigm. Taken together, mobile EEG protocols complicate the time-domain cleaning of electrophysiological experiments, and traditional heuristics for data cleaning cannot always be applied. Different automatic cleaning options - different challenges Removing samples from data manually is a sub-optimal approach, as it is both time-consuming and subjective. With the varying experience of the persons cleaning the data, the resulting cleaning strategy will also vary. Even within the same person, different mental states or varying noise levels in different datasets may alter the cleaning procedure. To ensure reliable, repeatable, and transparent cleaning, it is thus preferable to make use of automatic cleaning algorithms. Several such options exist, ranging from methods based on simple amplitude criteria, over the identification of artifactual time periods based on their spectral characteristics, to more complex approaches that identify artifactual time periods based on artifact subspaces, such as the EEGLAB (Delorme & Makeig, 2004) clean_rawdata function, which uses Artifact Subspace Reconstruction (ASR; Kothe & Jung, 2015). These methods do have their challenges, though. Identifying bad time points by their amplitude alone will either be a very lax measure or it will also remove all periods containing eye blinks, since these are high-amplitude signals.
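To make concrete why a pure amplitude criterion is so blunt, a minimal sketch follows (threshold and array contents are illustrative): any sample on which some channel exceeds the threshold is flagged, which is exactly what happens during high-amplitude blinks.

```python
import numpy as np

def amplitude_reject(data, threshold_uv=100.0):
    """Flag samples on which any channel exceeds the threshold (data: channels x samples, in µV)."""
    return np.any(np.abs(data) > threshold_uv, axis=0)

data = np.random.randn(64, 5000) * 20.0   # background EEG fluctuating around +-20 µV
data[:8, 1000:1150] += 150.0              # a blink-like frontal deflection
bad = amplitude_reject(data)
print(bad[1000:1150].mean())              # ~1.0: the whole blink interval would be removed
```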
Removing eye-blinks before ICA decomposition, however, is not desired, as ICA can typically remove these more reliably and preserve the respective time points for downstream analyses. Spectral measures will be prone to removing periods with muscle activity since these are usually detected by having increased high-frequency broad-band power (Onton & Makeig, 2006). But especially in mobile experiments, removing time periods with muscle activity would result in excessive cleaning, and the computed ICA decomposition would not be readily applicable to the entire dataset since it was not informed by time points containing muscle activity. And while the cleaning threshold of ASR can be adjusted to remove mainly large transient spikes, ASR is very sensitive to this threshold (Chang et al., 2020), and especially for mobile data, it does not always find a suitable baseline by itself. ASR thus requires a specifically recorded baseline and sometimes different cleaning thresholds for different movement modalities and even different datasets within the same modality, which renders it unsuitable for automatic data cleaning as targeted in this study. AMICA sample rejection The Adaptive Mixture ICA (AMICA; Palmer et al., 2011), currently one of the most powerful ICA algorithms (Delorme et al., 2012), includes an inbuilt function to reject bad samples that might not be well-utilized by researchers working with the algorithm: AMICA can reject bad samples based on their log-likelihood while computing the decomposition model. The log-likelihood is an objective criterion corresponding to the algorithm's estimate of the model fit, effectively leading to the rejection of samples AMICA cannot easily account for. Hence, unlike other cleaning methods, this option will only remove those kinds of artifacts that negatively affect the decomposition and retain those that can be decomposed and removed with ICA. This is done in an iterative fashion: First, several steps of the model estimation are performed, then samples are rejected based on the difference of their log-likelihood from the mean in standard deviations (SDs), then the model is estimated for several steps again before the next rejection, and so on. The start of the rejection, the number of rejection iterations, the SD threshold as well as the number of model computation steps between each rejection can be set in the AMICA computation parameters. This artifact rejection approach is model-driven and allows users to automatically remove time-domain data to improve the decomposition. When applied from the EEGLAB user interface (AMICA plugin v1.6.1), this is disabled by default, but when opening the rejection sub-interface, it is enabled with 5 iterations with 3 SDs, starting after the first AMICA step, with one step between each iteration. When the runamica15 function is applied directly from the command line, the cleaning is disabled by default, but when enabled, it rejects three times with 3SDs, starting after the second AMICA step, with 3 steps between each iteration. As a consequence, different settings will impact whether and how AMICA uses time-domain cleaning during the decomposition. However, while previous evaluations have shown that AMICA is one of the currently best algorithms for EEG decomposition (Delorme et al., 2012), the impact of the integrated time-domain cleaning procedure has not been evaluated yet. 
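The rejection logic itself is simple to state. The following conceptual Python sketch reproduces one ingredient of it, dropping samples whose log-likelihood falls more than a chosen number of SDs below the mean of the currently kept samples; the actual AMICA implementation interleaves these passes with model updates and is not reproduced here.

```python
import numpy as np

def reject_low_likelihood(loglik, keep, n_sd=3.0):
    """One rejection pass: drop currently kept samples whose log-likelihood lies
    more than n_sd standard deviations below the mean of the kept samples."""
    ll = loglik[keep]
    cutoff = ll.mean() - n_sd * ll.std()
    return keep & (loglik >= cutoff)

# Illustrative use with a stand-in for AMICA's per-sample log-likelihood trace
loglik = np.random.randn(150_000)
keep = np.ones_like(loglik, dtype=bool)
for _ in range(5):                    # e.g. 5 rejection iterations at 3 SDs
    keep = reject_low_likelihood(loglik, keep, n_sd=3.0)
    # ... in AMICA, several model-update steps run here before the next pass ...
print(keep.mean())                    # fraction of samples retained
```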
Current study In this study, we thus investigate the impact of automatic sample rejection on the quality of the AMICA decomposition of data from six experiments with different levels of mobility. To this end, we varied the cleaning intensity in terms of the number of cleaning iterations as well as the rejection threshold. As measures of decomposition quality, we used the number of brain, muscle, eye, and unspecified components, the residual variance of brain components, and the component mutual information. Additionally, we examined the signal-to-noise ratio in one standing and one mobile condition of the same experiment. We hypothesized that 1) increasing mobility affects the decomposition negatively, 2) cleaning affects the decomposition positively, and 3) an interaction exists, where experiments with more movement require more cleaning to improve their decomposition quality. We had no hypothesis as to what the optimal amount of cleaning should be. Based on the empirical evidence from this study, we formulate recommendations for using sample rejection with AMICA. Datasets For a reliable estimate of the effect of time-domain cleaning on the quality of the ICA decomposition for datasets from stationary as well as mobile EEG protocols, we included open access EEG datasets with a wide range of movement conditions in this study. This resulted in eight datasets from six studies containing standard seated protocols but also a gait protocol, arm reaching, and irregular movement protocols. We categorized the datasets into the four groups of low, low-to-medium, medium-to-high, and high movement intensity as laid out in the individual dataset descriptions. We used datasets that used at least 60 EEG channels (not including EOG), had a sampling rate of at least 250 Hz, and either contained channel locations or standard 10-20 system electrode layouts. Representation of different EEG setups was ensured by including no more than two datasets from one lab. We manually subsampled channels where necessary as described in the section Preparation. The following datasets were used: Video Game: This dataset is available at https://doi.org/10.18112/openneuro.ds003517.v1.1.0 (Cavanagh & Castellanos, 2021) and contains data of 17 participants (6 female and 11 male, mean age = 20.94 years, SD = 5.02 years). Participants were sitting while playing a video game using a gamepad. Data was recorded with a 500 Hz sampling rate using 64 electrodes (Brain Products GmbH, Gilching, Germany), and filtered with a high-pass filter of .01 Hz and a low-pass filter of 100 Hz. Channels were manually downsampled by selecting 58 channels of only scalp electrodes. This dataset was categorized as having low movement intensity. Face Processing: This dataset is available at https://doi.org/10.18112/openneuro.ds002718.v1.0.5 (Wakeman & Henson, 2021) and contains data of 19 participants (8 female and 11 male, age range 23-37 years). Participants were seated in front of a screen and exposed to images of faces. Data was recorded with a 1100 Hz sampling rate using 70 electrodes and a 350 Hz low-pass filter was applied. Channels were manually downsampled by selecting 65 channels that were closest to corresponding electrodes from a 10-20 layout of scalp electrodes. This dataset was categorized as having low movement intensity. Spot Rotation (stationary/mobile): This dataset is available at https://doi.org/10.14279/depositonce-10493 and contains data of 19 participants (10 female and 9 male, aged 20-46 years, mean age = 30.3 years). 
The experiment consisted of a rotation on the spot, which either happened in a virtual reality environment with physical rotation or in the same environment on a two-dimensional monitor using a joystick to rotate the view. Participants were standing in front of the computer screen in the stationary condition. The data was split into the two conditions of joystick rotation (stationary) and physical rotation (mobile) for the purpose of this study. Data was recorded with a 1000 Hz sampling rate using 157 electrodes (129 on the scalp in a custom equidistant layout, 28 around the neck in a custom neckband) with the BrainAmp Move System (Brain Products GmbH, Gilching, Germany). Channels were manually downsampled by selecting 60 channels that were closest to corresponding electrodes from a 10-20 layout of scalp electrodes. This dataset was categorized as having low-to-medium (stationary) and medium-to-high (mobile) movement intensity. Beamwalking (stationary/mobile): This dataset is available at https://doi.org/10.18112/openneuro.ds003739.v1.0.2 (Peterson & Ferris, 2021) and contains data of 29 participants (15 female and 14 male, mean age = 22.5 years, SD = 4.8 years). Participants either stood or walked on a balance beam and were exposed to sensorimotor perturbations. Perturbations were either virtual-reality-induced visual field rotations or side-to-side waist pulls. Because of the different degrees of movement, the data was split according to the two conditions: stationary (standing) and mobile (walking). Data was recorded at a 512 Hz sampling rate using 136 electrodes (BioSemi Active II, BioSemi, Amsterdam, The Netherlands). Channels were manually downsampled by selecting 61 channels that were closest to corresponding electrodes from a 10-20 layout of scalp electrodes. This dataset was categorized as having low-to-medium (stationary) and high (mobile) movement intensity. Prediction Error: This dataset is available at https://doi.org/10.18112/openneuro.ds003846.v1.0.1 and contains data of 20 participants (12 female, mean age = 26.7 years, SD = 3.6 years) of which one was removed by the authors due to data recording error. Participants were seated at a table and equipped with an HMD. The task consisted in reaching for virtual cubes that appeared in front of participants on the table. Participants moved their arm and upper torso to reach the virtual goal on the table. Data was recorded with a 1000 Hz sampling rate using 64 electrodes (Brain Products GmbH, Gilching, Germany). Channels were manually downsampled by selecting 58 channels of only scalp electrodes. This dataset was categorized as having medium-to-high movement intensity. Auditory Gait: This dataset is available at https://doi.org/10.1038/s41597-019-0223-2 (Wagner et al., 2019) and contains data of 20 participants (9 females and 11 males, aged 22-35 years, mean age = 29.1 years, SD = 2.7 years). Participants had to walk on a treadmill and synchronize their steps to a regular auditory pacing stream that included infrequent, sudden shifts in tempo. Data was recorded with a 512 Hz sampling rate using 108 electrodes with seven 16-channel amplifiers (g.tec GmbH, Graz, Austria), high pass filtered >0.1 Hz, low pass filtered <256 Hz and a notch filter was applied at 50 Hz to remove power line noise. Channels were manually downsampled by selecting 61 channels that were closest to corresponding electrodes from a 10-20 layout of scalp electrodes. This dataset was categorized as having high movement intensity. 
Data processing All data was processed in an automated fashion with identical preprocessing steps as displayed in figure 1. The main processing steps can be summarized under pre-processing, AMICA with sample rejection, and ICA post-processing, followed by the computation of quality measures to evaluate the decomposition. Preparation All datasets were first loaded into EEGLAB, and if a study contained data of two different conditions these were split and subsequently treated separately. We then manually selected channels to reduce the number of channels to a range of 58 to 65, excluding EOG or neck channel locations, and matched the remaining channels to the 10-20 layout as close as possible if the original layout was equidistant (see section Datasets). We did not subsample all studies to exactly 58 channels to allow an evenly distributed whole head coverage in all datasets. Full comparability of the channel layout between studies was not given, nor was it intended since the data were recorded in different labs with different devices. In addition, a previous investigation revealed a ceiling effect in obtaining brain ICs with an increasing number of electrodes used for ICA decomposition . A doubling of channels from 64 to 128 resulted in only 3 to 4 more brain ICs while differences in the number of brain ICs due to different movement intensities were more substantial with around 5 to 6 brain ICs. As a consequence, we expected our differences in channel count to have minimal impact on the computed quality measures. After channel reduction, all datasets were downsampled to 250 Hz and reduced to a length of 10 minutes (150,000 samples), ensuring there was the same amount of data available for all datasets. All data were subsequently processed using the BeMoBIL pipeline with identical parameters except for varying time-domain cleaning types and strengths. Zapline-plus The Auditory Gait dataset had a notch filter applied before uploading. All other datasets were processed with Zapline-plus (de Cheveigné, 2020; Klug & Kloosterman, 2022) to remove line noise and other frequency-specific noise peaks. Zapline (de Cheveigné, 2020) removes noise by splitting the data into an originally clean (data A) and a noisy part (data B) by filtering the data once with a notch filter (A) and once with the inverse of this notch filter (B). It then uses a spatial filter to remove noise components in the noisy part (B) to get a cleaned version of that part (B'). Finally, the two clean parts (A and B') are added back together to result in a cleaned dataset with full rank and full spectrum except for the noise. Zapline-plus is a wrapper for Zapline that chunks the data to improve the cleaning and adapts the cleaning intensity automatically, thus maximizing the cleaning performance while ensuring minimal negative impact. We used default parameters in all datasets containing mobile data (Spot Rotation, Prediction Error, Beamwalking), but limited the noise frequency detector to line noise for the Face Processing and Video Game datasets to avoid the removal of minor spectral peaks in stationary datasets that were not expected to be influenced by additional electronic equipment or mechanical artifacts. Channel cleaning and interpolation We detected bad channels using the clean_rawdata plugin of EEGLAB in an iterative way. We did not use the flatline_crit, the line_noise_crit, and samples were not rejected, nor was ASR applied. 
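Before turning to the channel-cleaning details below, a rough sketch of the Zapline splitting idea just described may help; note that Zapline proper uses a DSS spatial filter to isolate line-noise components, whereas this simplified version substitutes a plain SVD and uses illustrative parameter values.

```python
import numpy as np
from scipy import signal

def zapline_like(data, fs, line_freq=50.0, n_remove=3):
    """Rough sketch of the Zapline split (data: samples x channels).
    Part A is the notch-filtered data, part B is what the notch removed;
    the strongest components of B stand in for the noise components."""
    b, a = signal.iirnotch(line_freq, Q=30.0, fs=fs)
    clean_part = signal.filtfilt(b, a, data, axis=0)   # part A
    noisy_part = data - clean_part                     # part B
    u, s, vt = np.linalg.svd(noisy_part, full_matrices=False)
    s[:n_remove] = 0.0                                 # drop dominant line-noise components
    return clean_part + (u * s) @ vt                   # A + B': full rank, noise removed

# Example on synthetic data: a 64-channel recording with 50 Hz contamination
fs = 250
t = np.arange(30 * fs) / fs
data = np.random.randn(t.size, 64) + 2.0 * np.sin(2 * np.pi * 50 * t)[:, None]
cleaned = zapline_like(data, fs)
```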
The only used criteria to detect bad channels was the chancorr_crit criterion, which interpolates a channel based on a random sample consensus (RANSAC) of all channels and then computes the correlation of the channel with its own interpolation in windows of 5 s duration. If this correlation is below the threshold more than a specified fraction of the time (50% in our case), it is determined to be bad. Since the function has a random component, it does not necessarily result in a stable rejection choice, which is why this detection was repeated ten times, and only channels that were flagged as bad more than 50% of the time were finally rejected. Removed channels were then interpolated and the data was subsequently re-referenced to the average using the full rank average reference plugin of EEGLAB, which preserves the data rank while re-referencing. High-pass filtering High-pass filtering has a positive effect on the ICA decomposition quality and is especially important in mobile studies . For a dataset with 64 channels, a filter of 0.5 to 1.5 Hz cutoff resulted in the best decomposition for both stationary and mobile conditions , which is why we chose a cutoff of 1 Hz in this study. We specified the filter manually as recommended (Widmann et al., 2015) and used the same filter specifications as in : a zero-phase Hamming window FIR-filter (EEGLAB firfilt plugin, v1.6.2) with an order of 1650 and a passband-edge of 1.25 Hz, resulting in a transition bandwidth of 0.5 Hz and a cutoff frequency of 1 Hz. Independent component analysis with sample rejection All final datasets were decomposed using AMICA with different numbers of sample rejection iterations and different rejection thresholds to compare the results of the decomposition for the eight datasets. We used one model and ran AMICA for 2000 iterations. Since we interpolated channels previously we also let the algorithm perform a principal component analysis rank reduction to the number of channels minus the number of interpolated channels. As we used the full rank average reference, we did not subtract an additional rank for this. All computations were performed using four threads on machines with identical hardware, an AMD Ryzen 1700 CPU with 32GB of DDR4 RAM. In this step, we investigated the effect of different cleaning intensities using the AMICA sample rejection algorithm. All rejection was started after the runamica15 default 2 iterations, with the default 3 iterations between rejections. We repeated the AMICA computation process either without sample rejection, or with 1, 3, 5, 7, or 10 iterations using 3 SDs as the threshold, and additionally with 10, 15, and 20 iterations using 2.8 SDs, and last with 20 iterations using 2.6 SDs. Dipole fitting An equivalent dipole model was computed for each resulting independent component (IC) using the DIPFIT plugin for EEGLAB with the 3-layer boundary element model of the MNI brain (Montreal Neurological Institute, Montreal, QC, Canada). The dipole model includes an estimate of IC topography variance which is not explained by the model (residual variance, RV). For datasets that had individually measured electrode locations of an equidistant channel layout, the locations were warped (rotated and rescaled) to fit the head model. Transfer of ICA to unfiltered data One of the goals of this study was to investigate the effect of time-domain cleaning on the component mutual information (MI) and the signal-to-noise ratio (SNR) when applied to the full dataset. 
To this end, the resulting AMICA decomposition and dipole models were copied back to the preprocessed dataset from section Channel cleaning and interpolation (line noise removed, channels interpolated, re-referenced to the average, but no high-pass filter and no time-domain cleaning). Independent component classification using ICLabel In order to categorize the ICs according to their likely functional origin, we applied the ICLabel algorithm (Pion-Tonachini et al., 2019). ICLabel is a classifier trained on a large database of expert labelings of ICs that classifies ICs into brain, eye, muscle, and heart sources as well as channel and line noise artifacts and a category of other, unclear, sources. As it was shown that the 'lite' classifier worked better than the 'default' one for muscle ICs , we used the 'lite' classifier in this study. We used the majority vote to determine the final class, meaning the IC received the label with the highest probability. Quality Measures To measure the impact of the cleaning on the ICA decomposition, we used several measures that addressed both mathematical and practical considerations of the ICA decomposition: i) The mutual information (MI) of the components after applying the ICA solution to the complete dataset. The MI is essentially the mathematical description of how well the ICA can decompose the data, as the ICA minimizes component MI. ii) The number of ICs categorized as stemming from brain, muscle, and other sources defined by ICLabel, as especially in MoBI research not only brain, but all physiological sources can be of interest to the experimental analysis. iii) The mean RV of brain ICs, as this can be considered a measure of the physiological plausibility of the IC (Delorme et al., 2012). iv) The combination of the above measures as the ratio of the number of brain ICs by the mean brain RV (higher indicates better decomposition), v) An exemplary computation of the signal-to-noise ratio (SNR) on the two Spot Rotation datasets (physical rotation in VR and 2D monitor rotation), using the same measures as in . For this, we removed all non-brain ICs in the final dataset and computed event-related potentials (ERPs) of the trial onsets at the electrode closest to the Pz electrode in the 10-20 system. On average, 30.58 (SD = 7.31) epochs were used per subject and condition. This comparably low number of epochs is caused by the previous reduction in dataset length to 10 minutes, and the results of this approach should be interpreted with care. The signal was defined as the mean amplitude from 250 ms to 450 ms and the standard deviation in the 500 ms pre-stimulus interval was used as a measure of the noise (Debener et al., 2012). Results As the effects were either clearly absent or a reflection of the arbitrarily chosen steps in cleaning intensity, we did not perform a statistical analysis. We anticipated three effects regarding the time-domain cleaning and movement intensity on the decomposition quality of the ICA. First, we expected a higher movement intensity to decrease the decomposition quality. Second, higher cleaning intensity should remove more artifacts and therefore result in a better decomposition quality. Finally, we expected an interaction of movement intensity and cleaning intensity such that more movement would require more cleaning to reach a better decomposition. 
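For the exemplary SNR measure defined in point v) above, a minimal sketch of the computation follows (epoch array, channel choice and timing follow the description above; both signal and noise are computed on the averaged ERP here, and all variable names are illustrative).

```python
import numpy as np

def erp_snr(epochs, times):
    """epochs: trials x time samples at one channel (e.g. closest to Pz), brain ICs back-projected.
    times: time axis in seconds, 0 = stimulus onset."""
    erp = epochs.mean(axis=0)                                   # event-related potential
    sig = erp[(times >= 0.250) & (times <= 0.450)].mean()       # mean amplitude 250-450 ms
    noise = erp[(times >= -0.500) & (times < 0.0)].std()        # SD of the 500 ms pre-stimulus baseline
    return sig / noise

fs = 250
times = np.arange(-0.5 * fs, 0.8 * fs) / fs
epochs = np.random.randn(31, times.size)     # ~31 epochs per subject and condition, as reported above
print(erp_snr(epochs, times))
```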
The effect of movement intensity on the decomposition quality The most fundamental effect of changing the hyperparameters of AMICA sample rejection is the amount of data that is rejected. If more movement results in more problematic artifacts (those that AMICA cannot easily include in its model, e.g. due to nonlinearities), more movement should also result in more rejected samples. As can be seen in figure 2a, the dataset with the lowest amount of rejection was indeed a set with low movement intensity, namely the Face Processing dataset. Yet, the dataset with the second lowest removal was a high movement intensity set (Auditory Gait). Furthermore, the most data was rejected in the Spot Rotation (stationary) set, which was a set with low-to-medium movement intensity. Overall, there was no discernible trend as to whether movement intensity affected the amount of data that was rejected by the AMICA. Figure 2b shows the results for the MI, where the AMICA reached the lowest MI, and thereby its best decomposition from a mathematical point of view, in the Auditory Gait and Spot Rotation (mobile) datasets. The Auditory Gait and Spot Rotation (mobile) datasets were categorized as high and medium-to-high movement intensity, respectively. Even when comparing movement conditions of the same studies, no clear trend was identifiable. While the results for the Beamwalking study yielded a higher MI in its mobile condition, the Spot Rotation study showed the opposite trend. Figure 2: Amount of rejected data (a) and component mutual information (b). Shaded areas depict the standard error of the mean (SE). The numbers on the abscissa refer to the number of iterations of the sample rejection, with a default of 3 SDs as the rejection threshold. The numbers in brackets on the abscissa refer to the rejection threshold in SDs when deviating from the default. "No clean" refers to no sample rejection being applied when computing AMICA. The colors denote the movement intensities: yellow - low, orange - low-to-medium, red - medium-to-high, violet - high. The results of the number of resulting brain and muscle ICs can be seen in figure 3a. The highest amounts of brain ICs were found in the datasets Face Processing, Beamwalking (stationary), and Auditory Gait, with the last dataset having high movement intensity. The group of datasets with the lowest number of brain ICs (with around 6 fewer than the highest) consisted of both medium-to-high movement intensities but also included the low movement intensity Video Game study. The number of muscle ICs also did not show a clear trend with movement intensity: one of the datasets with the least amount of muscle ICs was the high movement intensity set Beamwalking (mobile). When looking at the within-study differences, the stationary conditions in both the Spot Rotation and the Beamwalking study showed around 2-3 brain ICs more than their respective mobile conditions. This was not the case for the number of muscle ICs, however, since both conditions of the Beamwalking study revealed almost identical amounts of muscle ICs and the Spot Rotation study even exhibited around 1-2 more muscle ICs in its stationary condition as compared to its mobile counterpart. Figure 3: Results for the brain (a) and muscle (b) ICLabel classifications, residual variance (RV; c) and the ratio of the number of brain ICs to their mean RV (d). Shaded areas depict the standard error of the mean (SE). The numbers on the abscissa refer to the number of iterations of the sample rejection, with a default of 3 SDs as the rejection threshold.
The numbers in brackets on the abscissa refer to the rejection threshold in SDs when deviating from the default. "No clean" refers to no sample rejection being applied when computing AMICA. The colors denote the movement intensities: yellowlow, orange -low-to-medium, red -medium-to-high, violet -high. As can be seen in figure 3c, the highest RV values (indicating lowest physiological plausibility) for brain ICs (above 20%) were found in the Face Processing set, which was a low movement intensity study. The three datasets of Prediction Error, Spot Rotation (stationary), and Video Gaming, coming from three different movement intensity groups, formed the cluster with the lowest RVs of around 10%. Taken together, the movement intensity of a study had no clear effect on the quality of the brain ICs. This is further supported by the fact that the best and worst ratio of the number of brain ICs to their mean RV was attained by two studies of the high movement intensity group. However, an effect within studies was present in the Spot Rotation study. Its stationary condition yielded RVs that were around 2-3% lower than its mobile condition. The Beamwalking study showed almost identical RVs in both conditions. This within-study effect was even more clear in the ratio of the number of brain ICs to their mean RV, where both stationary conditions showed a noticeable increase over their mobile counterparts (see figure 3d). Figure 4a shows the number of ICs labeled as "other". The highest numbers of "other" ICs were attained by the datasets from the low-intensity group followed by the low-to-medium movement intensity group. The Prediction Error set, belonging to the medium-to-high movement intensity group, had 17-18 "other" ICs and therefore around 5 "other" ICs fewer than the cluster of the low-to-medium movement intensity group. The lowest number of "other" ICs had the high movement intensity study Auditory Gait reaching fewer than 15 "other" ICs. Not aligned with this trend were the Spot Rotation (mobile) and the Beamwalking (mobile) datasets stemming from the medium-to-high and high movement intensity group, respectively, but having similar amounts of "other" ICs as the medium-to-low cluster. Figure 4: Results for other ICs (a) and exemplary signal-to-noise ratio (SNR; b). The SNR was computed on the Spot Rotation study only. Shaded areas depict the standard error of the mean (SE). The numbers on the abscissa refer to the number of iterations of the sample rejection, with a default of 3 SDs as the rejection threshold. The numbers in brackets on the abscissa refer to the rejection threshold in SDs when deviating from the default. "No clean" (left plot) / "0" (right plot) refers to no sample rejection being applied when computing AMICA. "No ICA" (right plot) refers to the SNR values being computed on the raw dataset without any ICA cleaning. The colors denote the movement intensities: yellow -low, orange -low-to-medium, red -medium-to-high, violet -high. Lastly, in many cases, ICA is used as a means for data cleaning or extracting specific aspects of the signal. Hence, we used the SNR of an ERP in one exemplary study, Spot Rotation, to give a practical example of the effect of the use of ICA. As can be seen in figure 4b, this example revealed a lower SNR in the mobile condition. The effect of cleaning intensity on the decomposition quality Increasing the number of cleaning iterations and decreasing the rejection threshold resulted in more samples being rejected (figure 2a). 
The effect of cleaning iterations reached a ceiling after around 5 iterations and substantial increases in data rejection were only reached when in addition to more iterations the rejection threshold was lowered as well. As the MI is computed on the entire dataset but the ICA did not take the rejected samples into account, all studies showed a monotonous increase in MI with stronger cleaning (figure 2b). Nevertheless, the magnitude of this effect was only moderate as in most datasets the MI remained almost constant and within the range of their SEs. Two sets (Prediction Error, Spot Rotation (stationary)) showed a stronger increase in the low iterations but also leveled off after around 7 iterations. The number of brain ICs did not vary with increased cleaning intensity outside of the SE range for almost all studies (figure 3a). Only the Audiocue set showed an increase in low amounts of cleaning (3-5 iterations) of around 2 ICs but this effect reached a ceiling for stronger cleaning. Other sets exhibited small trends in the same direction, but no strong effect could be seen. All datasets except for Beamwalking (stationary) exhibited a slight decrease in mean RV with the first few iterations of the cleaning. This trend was within the range of the SE, however, and reached a floor after around 5 to 7 iterations. Combining this small trend with the small trend in the number of brain ICs, we did find a positive effect of time cleaning on the ratio of the number of brain ICs to their mean RV ( figure 3d). Here, all studies exhibited an increase in this ratio with increased cleaning, up to a point of around 7 to 10 iterations. Especially the datasets Video Game and Spot Rotation (stationary) showed around a 20% increase in this ratio. Considering the SNR, no effect of stronger time-domain cleaning was visible in our study (figure 4b). There was a pronounced effect when generally using the ICA as a means to select the signal of interest by removing all non-brain ICs, but the time cleaning itself did not noticeably affect the SNR as all variations were within the range of the SE. The interaction effect of cleaning intensity and movement intensity on the decomposition quality The changes in the amount of rejected data when cleaning was intensified were very similar for all studies (figure 2a). Only the Beamwalking (mobile) set responded stronger to reductions of the rejection threshold, yielding more rejected data compared to the other studies. More differences could be found when looking at the MI, where the Prediction Error set showed a stronger increase of MI in early cleaning iterations than the other studies, and the Spot Rotation (stationary) set showed a delayed level-off effect (figure 2b). Those datasets, however, contained different levels of movement intensity, and their respective movement intensity group members did not share these differences in trend. The number of brain ICs and their RV values also did not exhibit an interaction effect, as the changes with increased cleaning were either shared across datasets or did not vary systematically with movement intensity ( figure 3). Lastly, the exemplary SNR shows a similar result (figure 4b). In the mobile condition, the SNR remains constant across different cleaning intensities. The stationary condition shows slightly more variation but none outside of the SE range and no clear trend either. 
Discussion In this study, we investigated the effect of time-domain cleaning and of different levels of mobility on the independent component analysis of EEG data. For this, we used eight datasets from six openly available studies and applied the AMICA algorithm using its own automatic sample rejection option with varying strengths. We evaluated the decomposition quality on the basis of the component mutual information, the number of brain and muscle ICs as determined by the ICLabel classifier, as well as the brain IC residual variance. In addition, we measured the SNR in an exemplary application. We hypothesized that increased levels of mobility would lead to decreased decomposition quality, stronger cleaning would lead to an increase in decomposition quality, and higher mobility would require more cleaning to reach the best quality. While we found some indication that increased movement resulted in a worse decomposition and that moderate cleaning did improve the decomposition, these results are not conclusive and our hypotheses are not fully supported by the empirical data. Ambiguous effect of movement intensity The effect of movement intensity in the investigated datasets was modest at best. We found variations between studies in different metrics, but these variations did not seem to depend on the movement intensity when comparing different studies. It could be assumed that higher movement intensities induce more muscle activity to be captured by the AMICA as muscle ICs. Since the number of ICs is limited by the number of channels, an increase in muscle ICs could come at the cost of a lower number of brain ICs. However, this does not seem to be the case in the present study. Neither did the physiological plausibility of the brain ICs vary systematically with movement intensity, as the best- and worst-scoring datasets were from the same movement intensity group. This lack of an effect of movement on the ICA decomposition quality could have several reasons: First, it may be the case that there simply is no adverse effect of mobility on data quality. However, another possible explanation could be that although we attempted to standardize the electrode layouts of the different studies, slight differences in the number and placement of electrodes remained, which could have affected the ICA. It has been shown that the number of electrodes does play a role in the number of resulting brain ICs and the quality of the ICA decomposition, but we would not expect that effect to be so substantial that it results in large differences like the one observed in the present study. Another possible explanation could be that our classification of the datasets with respect to movement intensity primarily reflected the mobility of participants in the respective studies (sitting, standing, sitting and pointing, rotation on the spot, walking). Mobility, however, does not necessarily reflect the impact of movement on data quality or additional noise originating from biological and mechanical sources. Future studies should systematically employ alternative movement classification schemes that investigate different kinds of movements and their impact on the recorded EEG data quality. While slow walking might not impact mobile EEG recordings at all, upper torso and arm movements even in seated participants might be associated with head and cap movements that could lead to electrode micromovements associated with non-stationarity of the signal.
Thus, even though participants' mobility is low while sitting and moving their arm, the movement itself might have a stronger impact than walking which can be considered the higher mobility condition. Finally, a major contributing factor to the decomposition quality might not be the mobility of the paradigm itself, but other aspects of the data recording such as the lab environment or the equipment used (Melnik et al., 2017). Hence, it cannot be ruled out that if identical equipment and paradigms are used, the anticipated negative effect of movement on the decomposition quality might be found. We had the opportunity to test this in addition to our larger comparison across studies since we included two studies that contained both a mobile and a stationary condition, allowing for a direct comparison of the results within the two studies. Both studies showed a decrease in the number of brain ICs in the mobile condition and an increase or no change in RV, resulting in a noticeable decrease in the ratio of the number of brain ICs to their mean RV. Although this did not hold true for other metrics such as the MI or the number of muscle ICs, this reduction in quality was also found in the SNR values of the Spot Rotation study. Taken together, it is likely that in a lab environment where everything is kept constant except the mobility of the participant, an increase in movement intensity will have a negative impact on the data. This negative impact, however, is less pronounced than anticipated, as we did not find it when comparing different studies from different lab setups. Moderate cleaning improved the decomposition The impact of cleaning intensity on the quality of the decomposition was smaller than anticipated. While different datasets scored differently on various metrics, these scores stayed for the most part within the SE range and not all datasets exhibited a noticeable effect. We did observe a positive effect of moderate cleaning on the number of brain ICs and their RV values, but the magnitude was limited, and some datasets exhibited almost no change. Additionally, some datasets required relatively strong cleaning to reach their maxima, while others showed a negative impact of too strong cleaning. This might be because the AMICA algorithm is more robust than anticipated and suitable for capturing or ignoring artifacts even without substantial cleaning. Especially considering physiological activity not stemming from the brain, removing single samples or small patches from the data does not remove the general activity of these sources. Thus, the AMICA algorithm will have to capture this activity regardless, and removing samples might not help much. Essentially, what researchers may consider artifacts in the data (such as eye movements, muscle activity, or recurrent cable sway from gait) is not necessarily an artifact for the underlying ICA model. If these signals are systematic and can be effectively modeled by the ICA, they will not be removed from the data and neither is it necessary to remove them beforehand. Artifact is a term from the user's perspective -the model is blind to such labels. Thus, only data that contains large, transient spikes or excessively strong other artifacts that can not easily be modeled by ICA but can be removed in the time-domain would benefit from cleaning. This could for example be time points where the participant was touching the EEG cap or other equipment, or moments where a virtual reality display is taken on or off. 
However, assuming that these artifacts are limited to periods of breaks or happen before or after the experiment, it might be suitable to just remove all non-experiment segments of the data and perform only minor additional time-domain cleaning before ICA. If it is essential to capture as many brain components as possible because one is interested in deep or unusual regions of interest and intends to perform source-level analysis, it might be justified to clean the data more strongly. However, in these cases, one must keep in mind that the resulting decomposition will not be able to fully capture the artifacts as it was not computed with them included. This may result in no relevant change in the actual measure to investigate, such as ERPs, as could be seen in the absent effect of time-domain cleaning on the exemplary SNR of the Spot Rotation data when all non-brain ICs were removed. 4.3 More movement did not require more cleaning As we expected an adverse effect of movement on the decomposition quality and a positive effect of time-domain cleaning, we also assumed that more cleaning would be necessary for data containing more movement. To our surprise, we found no trend indicating an interaction between the movement intensity and the required time-domain cleaning on the resulting decomposition quality. While we did find some indication for main effects, these trends were mostly shared across datasets in direction and magnitude. There were some exceptions but these were single datasets and their trend was usually not shared by the other dataset with the same class of movement intensity, and even if such an effect appeared, it was small. In accordance with our discussion of the expected main effects above, this again suggests that movement and thereby movement artifacts are less impactful on the ICA decomposition than previously assumed, and no substantially different time-domain cleaning is necessary for mobile EEG studies. Limitations and possible improvements As a first and major limitation, this study can only discuss the effects of the included datasets and does not necessarily generalize to other lab setups and experimental protocols. It was difficult to find an effect of variations in data processing without controlling for general data quality. A control for data quality, however, is not straightforward and would most likely only be possible when all investigated datasets share the same laboratory setup and recording equipment (Melnik et al., 2017), as well as experiment paradigm (such as an oddball task). Hence, although we tried to find a suitable amount of representative datasets with varying protocols, it would be favorable to have different studies repeat the same protocol in varying movement conditions. A taxonomy of different movement types such as gait, balancing, arm reaching, or tool use would be useful in this case, including a specification of the expected impact of these movement types on the EEG electrodes. Such a large dataset with consistent recording quality could help shed light on the smaller effects we found, especially since the results contradict our expectations. A second limitation is the measure of decomposition quality. 
We used the number of brain ICs as classified by ICLabel, and their RV as a proxy for decomposition quality, but this approach has two limitations: i) The body of data that ICLabel used to train the classifier did not contain sufficient examples from mobile experiments, meaning that the classification results might not be fully reliable in our context. Extending the classifier to MoBI or mobile EEG studies would alleviate this issue. ii) RV values might also be problematic to interpret, especially those of non-brain sources, which is why we did not take those into account. However, especially in the MoBI context, having more physiologically plausible muscle and eye ICs would also be of value, and this is impossible to measure using the current version of dipole fitting in EEGLAB. In the future, this can be done using HArtMuT, a head model that contains sources for eyes and muscles and can thus lead to more reliable estimates of the IC source and its RV (Harmening et al., 2022), but was not yet available at the time of this study. Another option to take into account is to investigate the SNR after data cleaning in more depth. This, however, would also require the same study to be repeated in different movement conditions, akin to the Spot Rotation SNR evaluation we performed. This would shed light on more practical implications of the investigated effects. A third limitation could be that we only used one method for time-domain cleaning. It is possible that other cleaning options could lead to different results. However, we believe that since the AMICA auto sample rejection uses its own objective metric, it is unlikely that the cleaning results will be substantially improved when using other algorithms. A separate investigation of the effect of a proposed time-domain cleaning algorithm in comparison with the AMICA auto sample rejection found no noticeable difference . Conclusions In our investigation of the effect of time-domain cleaning and movement intensity on the quality of the ICA decomposition, we did not find substantial evidence to support our hypotheses. While the expected adverse effect of movement on the data could be seen within studies, it is inconclusive between studies, pointing to the fact that lab setup, equipment, and possibly the paradigm itself might have a greater impact on the decomposition quality than the movement intensity. Additionally, while we did find some evidence that moderate cleaning prior to ICA computation improves the decomposition, this effect was far weaker than anticipated and it did not vary systematically with movement intensity in our study. This suggests that the AMICA algorithm is very robust and can handle artifacts even with limited data cleaning. We thus recommend not to remove substantial parts of the data using time-domain cleaning before running AMICA. Moderate amounts of cleaning such as 5 to 10 iterations of the AMICA sample rejection starting after 2 iterations with the default 3 SDs as threshold and 3 iterations between rejections will likely improve the decomposition in most datasets, irrespective of the movement intensity. Only in special circumstances, strong cleaning will be relevant and more beneficial.
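In practical terms, the recommendation above amounts to a parameter set along the following lines (a Python sketch with illustrative key names, not the literal runamica15 or EEGLAB plugin options):

```python
# Moderate AMICA sample rejection, as recommended above (illustrative key names only)
amica_rejection_defaults = {
    "do_reject": True,   # enable the built-in sample rejection
    "num_rej": 5,        # 5 to 10 rejection iterations
    "rej_sig": 3.0,      # threshold: 3 SDs of the per-sample log-likelihood
    "rej_start": 2,      # start after 2 model-update iterations
    "rej_int": 3,        # 3 model-update iterations between rejection passes
}
```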
11,309.2
2023-08-26T00:00:00.000
[ "Computer Science", "Engineering", "Medicine" ]
Biodegradation of Uric Acid by Bacillus paramycoides-YC02 High serum uric acid levels, known as hyperuricemia (HUA), are associated with an increased risk of developing gout, chronic kidney disease, cardiovascular disease, diabetes, and other metabolic syndromes. In this study, a promising bacterial strain capable of biodegrading uric acid (UA) was successfully isolated from Baijiu cellar mud using UA as the sole carbon and energy source. The bacterial strain was identified as Bacillus paramycoides-YC02 through 16S rDNA sequence analysis. Under optimal culture conditions at an initial pH of 7.0 and 38 °C, YC02 completely biodegraded an initial UA concentration of 500 mg/L within 48 h. Furthermore, cell-free extracts of YC02 were found to catalyze and remove UA. These results demonstrate the strong biodegradation ability of YC02 toward UA. To gain further insight into the mechanisms underlying UA biodegradation by YC02, the draft genome of YC02 was sequenced using Illumina HiSeq. Subsequent analysis revealed the presence of gene1779 and gene2008, which encode riboflavin kinase, flavin mononucleotide adenylyl transferase, and flavin adenine dinucleotide (FAD)-dependent urate hydroxylase. This annotation was based on the GO or KEGG database. These enzymes play a crucial role in the metabolic pathway, converting vitamin B2 to FAD and subsequently converting UA to 5-hydroxyisourate (HIU) with the assistance of FAD. Notably, HIU undergoes a slow non-enzymatic breakdown into 2-oxo-4-hydroxy-4-carboxy-5-ureidoimidazoline (OHCU) and (S)-allantoin. The findings of this study provide valuable insights into the metabolic pathway of UA biodegradation by B. paramycoides-YC02 and offer a potential avenue for the development of bacterioactive drugs against HUA and gout. Introduction Uric acid (UA) is the end product of purine metabolism in humans, with a molecular formula of C5H4N4O3 (7,9-dihydro-1H-purine-2,6,8(3H)-trione) and a molecular weight of 168.11 Da [1]. UA metabolism involves complex processes; under normal conditions, the production and excretion of UA in humans are basically in a dynamic balance. In humans, UA cannot undergo oxidative degradation to the more soluble compound allantoin due to a mutation in the gene coding for uricase [2]. In the absence of uricase, UA is excreted in two main ways: about two-thirds through the kidneys and one-third through the intestines [3]. Hyperuricemia (HUA) is a metabolic disease resulting from UA underexcretion, overproduction, or both, and is defined as a serum UA level above 7 mg/dl in males and 6 mg/dl in females [4]. The global prevalence of HUA has increased significantly in recent years, with an overall trend toward higher prevalence and younger age of onset [5]. The latest data showed that the prevalence of HUA in mainland China was 17.4%, and the prevalence in men was twice that in women [6]. Another survey revealed that the prevalence of HUA in the U.S. was 20.2% in males and 20.0% in females [7]. Early HUA has no clinical symptoms and is characterized only by elevated serum UA levels, but with the development of the disease, it may lead to gout [8], chronic kidney disease [9], cardiovascular disease [10], hypertension [11], type 2 diabetes mellitus [12] and other metabolic syndromes [13]. Therefore, the development of effective methods to manage HUA has become a hotspot of current biomedical research. Currently, the management of HUA involves pharmacological treatment and dietary intervention.
Chemical drugs used for HUA treatment can be classified into three main categories: xanthine oxidase (XOD) inhibitors (such as allopurinol, febuxostat, and topiroxostat), uricosuric agents (such as lesinurad, probenecid, and benzbromarone), and enzyme therapies (such as rasburicase and pegloticase) [14]. Their mechanisms of action include decreasing UA production, increasing UA excretion and metabolizing serum UA to allantoin. Exogenous medications are effective, but long-term use can cause varying degrees of damage to the body and can lead to reduced efficacy and allergic reactions [15]. For example, one report shows that benzbromarone has adverse effects on liver and kidney function and has been withdrawn in most countries [16]. Another report shows allopurinol may cause mild rashes and severe skin reactions [17]. Although several novel drugs, including Ulodesine (an inhibitor of purine nucleoside phosphorylase), RLBN1001, and KUX-1511 (inhibitors of XOD and urate transporter 1) [18], are currently under development, the management of HUA still faces suboptimal outcomes. In addition, traditional Chinese medicine has long been used to manage HUA and gout, for example Simiao Powder (Phellodendri amurensis cortex, Semen coicis, Radix achyranthes root, Atractylodis rhizoma) [19], Compound Tufuling Granules [20], Astragalus membranaceus [21], Sanghuangporus vaninii and Inonotus hispidus [22]. Chinese herbal medicines possess intricate compositions and exert their effects on multiple targets to reduce serum UA levels. They achieve this by targeting the UA transporter, inhibiting UA synthesis, alleviating inflammation, guarding against renal fibrosis, and modulating oxidative stress [23]. Moreover, the gut microbiota can be modulated, and the abundance of beneficial bacteria increased, by Chinese herbal medicines [24]. Compared with western medicine for the management of HUA, however, their unclear molecular mechanisms of action greatly limit their clinical applicability [25]. In recent decades, an increasing number of studies have demonstrated that diet plays a crucial role in the development of HUA and gout. Dietary interventions accordingly necessitate restricting the consumption of purine-rich foods, alcohol, and fructose. The recommended daily intake of purines in the diet is less than 400 mg in Japan. Excessive intake of purine-rich foods can lead to elevated serum UA levels [26], which is mainly due to the fact that exogenous purines are basically converted into UA in the human body. An excessive intake of alcohol and fructose can cause decreased UA excretion, increased UA production, or both. The primary reasons are their potential to affect the kidneys' normal excretion of UA and their dependence on substantial quantities of adenosine triphosphate (ATP) and phosphate for liver metabolism [27]. Undoubtedly, dietary intervention holds great significance not only in terms of economic considerations but also because of the potential adverse effects associated with pharmacological treatment. However, dietary interventions necessitate significant patient cooperation and long-term adherence [28], which often leads to low patient compliance. According to the available research, the microbial biodegradation of UA provides new research ideas for the treatment of HUA and gout. Some studies showed that lactic acid bacteria (LAB) could biodegrade UA and had the potential to intervene in the treatment of HUA.
These LAB were isolated from various fermented foods, such as Lactobacillus plantarum Q7 (from yak yogurt) [29] and Limosilactobacillus fermentum JL-3 (from Jiangshui) [30]. However, most studies on the amelioration of HUA by LAB focus on degrading purine compounds [31] and suppressing XOD activity [32]; many LAB do not have the ability to biodegrade UA directly. Others indicated that some Bacillus species had the ability to produce uricase, including B. subtilis [33], B. licheniformis [34], B. thermocatenulatus [35] and B. cereus [36], but the UA biodegradation ability of all reported bacteria was low. In this study, a bacterial strain with a stronger ability to biodegrade UA was isolated from Baijiu cellar mud for the first time and identified as Bacillus paramycoides-YC02. The culture conditions were optimized, and both YC02 and its cell-free extract (CE) demonstrated the effective removal of UA, indicating the production of UA-biodegrading enzymes by YC02. Subsequently, the draft genome of YC02 was sequenced to find genes encoding UA biodegradation enzymes, leading to the elucidation of the biodegradation mechanism. These findings hold significant importance in the development of bacterioactive drugs targeting HUA and gout. Samples and Media UA with a purity of 99% was purchased from Aladdin Chemical Co. (Rogers, MN, USA), and all other chemicals used in this study were of analytical grade. The bacterial strain used in this study was isolated from Baijiu cellar mud from Shandong Bandaojing Co., Ltd. Isolation of UA Biodegrading Bacteria Ten grams of Baijiu cellar mud was added to 100 mL of sterilized water, stirred thoroughly and left for 30 min; then, 5 mL of supernatant was transferred into a 250 mL flask containing 45 mL of modified UA-MSM and incubated for 7 days at 38 °C with shaking at 200 rpm in an incubator shaker. Every 7 days, 5 mL of the culture was subcultured into fresh modified UA-MSM under the same culture conditions. The concentration of UA was increased stepwise from 0.5 to 4 g/L (0.5, 1, 2, 3, 4 g/L) [37]. After 5 weeks, the final culture was serially diluted and spread onto LB agar plates, which were incubated for 48 h at 38 °C. Single colonies grown on the LB agar plates were picked and inoculated into modified MSM containing UA to test their biodegradation ability; this was repeated several times until a pure bacterial strain was isolated. Identification and Draft Genome Sequencing of YC02 The morphology of YC02 was observed with a microscope (CX41, Olympus, Tokyo, Japan). YC02 was inoculated in LB medium and incubated for 48 h at 38 °C with shaking at 200 rpm. Then, 5 mL of YC02 culture was used to extract genomic DNA with a bacterial genomic DNA kit. Using the extracted bacterial genome as the template, a pair of universal primers, 27F (5′-AGAGTTTGATCCTGGCTCAG-3′) and 1492R (5′-GGTTACCTTGTTACGACTT-3′), was used to perform PCR [38]. The purified PCR products were sent to the Biotechnology Co. (Shanghai, China) for sequencing. The 16S rDNA sequences of selected strains were analyzed by BLAST comparison against GenBank and the Ribosomal Database Project. The phylogenetic tree was constructed based on the 16S rDNA gene sequences using the neighbor-joining method in MEGA 6.0. A culture of YC02 was prepared, and the bacterial pellet was collected by centrifugation at 10,000 rpm for 20 min at 4 °C and sent to Meiji Bio Co. (Shanghai, China) for draft genome sequencing [39].
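For readers who want to reproduce the phylogenetic step outside MEGA, a minimal sketch using Biopython is given below; the input file name and alignment format are placeholders, and the identity-based distance is only one of several possible models.

```python
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

# Placeholder input: an aligned FASTA of the YC02 16S rDNA sequence plus reference strains
alignment = AlignIO.read("16s_alignment.fasta", "fasta")

calculator = DistanceCalculator("identity")        # simple identity-based distance matrix
distance_matrix = calculator.get_distance(alignment)

constructor = DistanceTreeConstructor()
nj_tree = constructor.nj(distance_matrix)          # neighbor-joining, as in the study

Phylo.draw_ascii(nj_tree)                          # quick text rendering of the tree
```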
Optimization of UA Biodegradation Conditions
The biodegradation experiments with YC02 were carried out in 20 mL of sterilized modified MSM containing 500 mg/L UA and incubated at 38 °C with a shaking rate of 200 rpm for 72 h. The optical density (OD600) was measured to represent bacterial growth. In this experiment, three factors, namely initial pH, temperature and initial UA concentration, were tested as independent variables to investigate their effects on the UA biodegradation ratios of YC02. The initial pH of the modified MSM was set at 5.0, 6.0, 7.0, 8.0 and 9.0 (38 °C, initial UA concentration of 500 mg/L); the temperature at 20, 25, 30, 38 and 40 °C (initial pH 7.0, initial UA concentration of 500 mg/L); and the initial UA concentrations at 100, 200, 500, 1000 and 1500 mg/L (initial pH 7.0, 38 °C). Every 12 h, 2 mL of culture was taken for measuring the UA concentration and OD600. The corresponding results were calculated by the following formula:

Biodegradation ratio (%) = (C0 - Ct)/C0 x 100

where C0 is the initial concentration of UA (mg/L) and Ct is the residual concentration of UA in the sample at time t (mg/L).

Biodegradation of UA by the CE of YC02
YC02 was inoculated in 20 mL of sterilized LB medium and incubated for 48 h at 38 °C with a shaking rate of 200 rpm. Then, the YC02 cultures were centrifuged at 12,000 rpm for 20 min [40]. The sediment of YC02 cells was washed several times, re-suspended in sterilized phosphate-buffered saline (PBS, pH 7.0), and then ultrasonicated for 25 min at 450 W [41]. The supernatant, taken as the CE, was obtained by centrifugation at 10,000 rpm for 20 min at 4 °C. Then, 5 mL of CE was added to sterilized PBS (pH 7.0) with a UA concentration of 570 mg/L. The reaction was carried out at 38 °C with a shaking rate of 200 rpm for 12 h. Afterwards, 0.5 mL of the reaction mixture was taken at 0, 1, 2, 5, 7, 9 and 12 h for measuring the UA concentration. The protein concentration was determined with the BCA method [42].

Analysis of UA by HPLC
UA was measured by HPLC (Shimadzu LC-20AT, Tokyo, Japan) [43]. The tested sample was diluted an appropriate number of times with 0.5 M NaOH solution and centrifuged at 12,000 rpm for 20 min. The supernatant was passed through an aqueous-phase microporous membrane (0.22 µm) and used to detect concentration levels. Then, 20 µL of the mixed solution was injected and analyzed by HPLC (chromatographic column: Kromasil C18 (4.6 x 250 mm, 5 µm); UV detection wavelength: 283 nm; mobile phase: methanol : 0.5% acetic acid aqueous solution (10:90); flow rate: 1 mL/min; column temperature: 35 °C). A standard curve relating UA content to peak area was established and used to calculate the UA concentration from the peak area. Each sample was measured three times; the results were averaged and reported with the standard deviation to represent the UA concentration.
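To make the two calculations described in this section concrete (the biodegradation ratio and the HPLC standard curve), the following Python sketch shows how they can be carried out. It is only an illustration: the numerical concentrations and peak areas are invented for the example and are not data from this study.

import numpy as np

def biodegradation_ratio(c0, ct):
    """Biodegradation ratio (%) = (C0 - Ct) / C0 * 100."""
    return (c0 - ct) / c0 * 100.0

# Example: initial 500 mg/L UA, residual 2 mg/L after incubation (illustrative numbers).
print(f"biodegradation ratio = {biodegradation_ratio(500.0, 2.0):.1f} %")

# HPLC standard curve: linear fit of peak area vs. known UA concentration (illustrative data),
# then conversion of an unknown sample's peak area into a concentration.
conc_std = np.array([10.0, 50.0, 100.0, 200.0, 400.0])       # mg/L
area_std = np.array([1.2e4, 6.1e4, 1.19e5, 2.42e5, 4.80e5])  # peak areas, arbitrary units
slope, intercept = np.polyfit(conc_std, area_std, 1)

unknown_area = 2.0e5
unknown_conc = (unknown_area - intercept) / slope
print(f"estimated UA concentration = {unknown_conc:.1f} mg/L")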
Isolation and Identification of the UA-Biodegrading Strain
The monoclonal colonies of YC02 grown on the LB agar plate (Figure 1, left) show the slightly shiny white color of YC02 colonies. YC02 is Gram-positive, and rod-shaped cells with central spores were observed under a light microscope at 1000x magnification (Figure 1, right). The strain is preserved in the China General Microbiological Culture Collection Center (CGMCC No. 22812). According to Figure 2, the association between YC02 and other closely related members reveals YC02's closest resemblance to Bacillus paramycoides, which was identified as a new species within the B. cereus group in 2017 by Yang Liu et al. [44]. The strain was therefore identified as B. paramycoides YC02 based on phylogenetic analysis of the 16S rDNA sequence. B. paramycoides has been reported to biodegrade many organic compounds. For example, one report showed that B. paramycoides could first degrade acephate to methamidophos, and that methamidophos was then degraded to small molecules by this bacterium [45]. Another report showed that B. paramycoides could utilize polyethylene as the sole carbon source [46]. In addition, B. paramycoides has been reported to produce a variety of enzymes [47-49], such as alpha-amylase, alkali-thermotolerant xylanase and ligninolytic enzymes. It also has prospective applications in soil remediation [50] and wastewater treatment [51]. To the best of our knowledge, no previous reports regarding UA biodegradation by B. paramycoides have been documented.
Effects of Culture Conditions on the Biodegradation of UA by YC02
The effects of different initial pH values, temperatures and initial UA concentrations on the growth of YC02 and on its UA biodegradation ratios were investigated. UA biodegradation ratios exceeded 97% at initial pH values of 7.0 and 8.0, with no significant difference observed in YC02 growth (Figure 3a). These results show that neutral and slightly alkaline conditions were favorable for UA biodegradation by YC02, whereas an acidic environment inhibited growth. Consequently, the optimal initial pH for biodegradation was determined to be 7.0. Conversely, UA biodegradation ratios differed significantly among the tested temperatures in the range of 20-40 °C (Figure 3b): the ratio was 99.6% at 38 °C but declined to 30.8%, 50.9%, 78.6% and 86.4% when the culture temperature was 20, 25, 30 and 40 °C, respectively. These findings suggest that the optimal temperature for UA biodegradation is 38 °C. Furthermore, when the initial concentration of UA was below 500 mg/L, YC02 demonstrated biodegradation ratios exceeding 90.6%. However, as the initial UA concentration increased to 1000 mg/L and 1500 mg/L, the biodegradation ratios decreased to 22.8% and 10.6%, respectively (Figure 3c). These results indicate that both lower and higher initial UA concentrations can limit or inhibit the growth of YC02. Consequently, the subsequent biodegradation experiments with YC02 employed the optimal culture conditions of initial pH 7.0 and 38 °C.

Biodegradation of UA by B. paramycoides YC02 and Its CE
B. paramycoides YC02 could completely remove 500 mg/L UA within 48 h; meanwhile, the OD600 reached above 1.2 and the logarithmic growth phase of YC02 began at 12 h (Figure 4a). These results indicate that YC02 has a stronger ability to biodegrade UA than the other bacteria reported so far. The in vitro biodegradation of UA with L.
plantarum Q7 was studied by Jiayuan Cao et al. [29]; the biodegradation ratio reached 81.30%. Moreover, the biodegradation ratios of nucleotides, nucleosides and purine by L. plantarum Q7 were 99.97%, 99.15% and 87.35%, respectively. The in vivo effect of Limosilactobacillus fermentum JL-3 on UA was also studied by Ying Wu et al. [30]: after 15 days of intervention, the JL-3 group showed decreased UA levels in hyperuricemic mice, with UA levels in feces and urine decreasing by more than 30%. These reports have implications for our follow-up biodegradation experiments, and an in vivo UA biodegradation experiment still needs to be performed. The CE of B. paramycoides YC02, containing a protein concentration of 5.38 g/L, could reduce 570 mg/L of UA to 183 mg/L within 12 h (Figure 4b). In general, the biodegradation ability improves greatly as the protein concentration of the CE increases [52]. The intracellular crude enzyme of YC02 exhibited high activity for up to 7 h, after which the enzyme activity gradually diminished. These results confirm YC02's capability to produce enzymes that facilitate the biodegradation of UA. However, despite this study's efforts, no discernible products were observed during the biodegradation of UA by B. paramycoides YC02.

Genomic Analysis and Metabolic Pathway for UA Biodegradation
To delineate the mechanism by which UA is removed by YC02 treatment, the draft genome was sequenced on the Illumina HiSeq platform with paired-end sequencing. It revealed a total length of 5,487,337 bp, with an average GC content of 35.14%. The reads were assembled into 66 scaffolds with an N50 of 641,208 bp, and 5609 protein-coding genes, 105 tRNA genes, 9 rRNA genes and 140 sRNA genes were predicted. Genome annotation revealed that 69.19% of the genes (3881) were categorized into 21 different COG categories (Figure 5a). Notably, 213 genes were associated with carbohydrate transport and metabolism (G), 356 genes with amino acid transport and metabolism (E), 298 genes with transcription (K), and 252 genes with inorganic ion transport and metabolism (P). Furthermore, 4118 genes were annotated in the GO database, with 76.1% of these genes (3135) attributed to molecular function (Figure 5b); the proportions of genes for ATP binding, DNA binding, hydrolase activity, metal ion binding and transferase activity were 7.58%, 7.01%, 4.67%, 3.58% and 3.16%, respectively. Additionally, 2538 genes were annotated in the KEGG database; among them, 786 genes were associated with the general metabolic pathway, 267 with amino acid metabolism, 239 with carbohydrate metabolism, and 184 with the metabolism of cofactors and vitamins (Figure 5c).
In the present study, B. paramycoides YC02 was found to be able to biodegrade UA. According to the results mentioned above, genes and enzymes involved in the direct conversion of UA to allantoin were not found, but a flavin adenine dinucleotide (FAD)-dependent urate hydroxylase encoded by gene2008 was found in the GO database. Meanwhile, riboflavin kinase and flavin mononucleotide (FMN) adenylyl transferase, encoded by gene1779, were found in the KEGG database. Based on these results, the UA biodegradation pathway was identified (Figure 6). Firstly, vitamin B2 is converted to FMN by riboflavin kinase, and FMN is then converted to FAD by FMN adenylyl transferase [53]. Afterwards, the FAD-dependent urate hydroxylase converts urate to 5-hydroxyisourate (HIU) with the assistance of FAD [54]. HIU can then be spontaneously broken down to 2-oxo-4-hydroxy-4-carboxy-5-ureidoimidazoline (OHCU) and (S)-allantoin in vitro at a slow, non-enzymatic rate [55]. This is consistent with the general pathway by which UA is catabolized by bacteria.
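A compact way to summarize the inferred pathway is as an ordered list of steps. The Python sketch below is only a restatement of Figure 6 as a data structure, using the gene labels from the draft-genome annotation described above; it performs no biochemistry and is not part of the original analysis.

# UA biodegradation pathway inferred from the YC02 draft genome (restating Figure 6).
# Each step: (substrate, product, catalyst or note).
UA_PATHWAY = [
    ("riboflavin (vitamin B2)", "FMN", "riboflavin kinase (gene1779)"),
    ("FMN", "FAD", "FMN adenylyl transferase (gene1779)"),
    ("urate + FAD", "5-hydroxyisourate (HIU)", "FAD-dependent urate hydroxylase (gene2008)"),
    ("HIU", "OHCU", "spontaneous, non-enzymatic"),
    ("OHCU", "(S)-allantoin", "spontaneous, non-enzymatic"),
]

for substrate, product, catalyst in UA_PATHWAY:
    print(f"{substrate} -> {product}  [{catalyst}]")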
During the genomic analysis, apart from gene1779 and gene2008, we also identified additional genes capable of encoding enzymes involved in the human purine metabolism pathway (Table 1). These enzymes can biodegrade the precursors of UA synthesis and thereby reduce the amount of UA produced. In a previous report, the whole genome of L. brevis DM9218 was sequenced by Haina Wang et al.; a gene named ORF00084 was discovered, and the inosine-hydrolyzing ability of its gene product was verified. The gene product is an inosine hydrolase, which can decrease UA production by degrading inosine, the most important precursor of UA synthesis [56]. Next, we will proceed to validate the ability of YC02 and its CE to biodegrade inosine and guanosine. Upon obtaining these results, we will initiate the cloning of genes from the YC02 genome and the construction of engineered bacteria for application in a rat model of HUA [57]. At present, Chinese herbal medicines fermented with microorganisms for the management of HUA are also a focus of our research. Ruoyu Wang et al. found that B. subtilis-fermented Astragalus membranaceus (BFA) could decrease serum UA levels in hyperuricemic mice [58]. Recently, it has been reported that inflammation can affect the expression of UA transporters involved in UA reabsorption, which results in elevated serum UA levels [59]. BFA could attenuate renal inflammation and regulate the expression of urate transporters. Moreover, BFA could enhance the gut barrier and restore the gut microbiota. This report indicates that Chinese herbal medicines fermented with microorganisms have the potential to become novel functional foods for ameliorating HUA. In this study, we also wished to know whether YC02 has the potential to ferment Chinese herbal medicines. In the CAZy annotation, 119 genes encoding carbohydrate-active enzymes were found, including glycosyl transferases (43 genes), carbohydrate esterases (38 genes), glycoside hydrolases (43 genes), auxiliary activities (13 genes) and polysaccharide lyases (1 gene). According to the CAZy annotation, the fermentation of traditional Chinese herbal medicines by B. paramycoides YC02 has promising potential applications in the prevention and treatment of HUA. In following research, we also want to explore the potential therapeutic ability of different B. paramycoides-fermented Chinese herbal medicines against HUA, especially homologous medicines and foods (such as Poria cocos (Schw.)
Wolf, Polygonatum Mill. and Astragalus membranaceus).

Conclusions
In this study, we isolated B. paramycoides YC02, an efficient UA-biodegrading bacterium, from Baijiu cellar mud for the first time. Under the optimal culture conditions (initial pH 7.0, 38 °C), YC02 completely biodegraded an initial UA concentration of 500 mg/L within 48 h. Moreover, the CE of YC02, containing 5.38 g/L of protein, removed 387 mg/L of UA within 12 h. These findings clearly demonstrate YC02's remarkable capacity for UA biodegradation. Importantly, the draft genome analysis revealed the presence of gene1779 and gene2008, encoding riboflavin kinase, FMN adenylyl transferase and FAD-dependent urate hydroxylase, which are involved in UA biodegradation. Notably, FAD-dependent urate hydroxylase plays a crucial role in the biodegradation process by converting urate to HIU with the assistance of FAD; HIU subsequently breaks down spontaneously into OHCU and (S)-allantoin. These findings shed light on the metabolic pathway employed by YC02 in UA biodegradation. Additionally, our investigation unveiled numerous genes that encode enzymes responsible for the biodegradation of UA precursors, as well as carbohydrate-active enzymes. These findings open up new avenues for future research on decreasing serum UA levels, including the biodegradation of inosine and guanosine and the inhibition of XOD. Furthermore, YC02 could be combined with Chinese herbal medicines to develop functional foods. We anticipate that these results will provide valuable insights into the amelioration of HUA and gout.
The Insulation for Machines Having a High Lifespan Expectancy: Design, Tests and Acceptance Criteria Issues

The winding insulation of electrical machines remains a frequently revisited topic. The severity of the criteria demanded by electrical machine applications increases continuously, and manufacturers and designers are always confronted with new requirements or new criteria with enhanced performances. The most problematic requirements investigated here are an extremely long lifespan coupled with critical operating conditions (overload, supply grid instabilities, and critical operating environments). Increasing the lifespan does not automatically bring a considerable benefit, because the purchasing price of usual machines has to be compared to the purchasing and maintenance price of long-lifespan machines: a machine with a 40-year lifespan will cost more than twice the usual price of a 20-year lifetime machine. Systems that need a long lifetime are systems that are crucial for a country and for which outage costs are exorbitant; nuclear power stations are such systems. The technologies used have certainly evolved since the first nuclear power plant, but they cannot evolve as quickly as in other sectors of activity: no one wants to use an immature technology in such power plants. Even if electrical machines have exceeded 100 years of age, their improvements are linked to patient and continuous work. Nowadays, winding insulation systems have a well-established structure, especially for high voltage windings. Unfortunately, a long lifespan is not guaranteed by this alone. Several manufacturers' improvements, induced by many years of experiment, have led to the writing of standards that help customers and manufacturers to regularly enhance insulation specifications and qualifications. Hence, in this publication, the authors give a step-by-step, exhaustive review of one insulation layout and take the time to give a detailed report on the standards that are linked to insulation systems. No standard can provide assurance about lifespan, nor do any insulation tests incorporate all of the operating conditions: thermal, mechanical, moisture and chemical. Even if a manufacturer uses standards compliance to demonstrate the quality of its realization, in the end the successful use in operation remains the objective test. Thereafter, both customers and manufacturers will use the standards while knowing that such documents cannot fully satisfy their wishes. In a 20-year historical review, the authors highlight the time required for insulation improvements and the small breakthroughs in standards writing. High-lifespan machines are not the main interest of standards. A large part of this publication is dedicated to the improvements of the insulation wall required to achieve the lifespan. Even if the choice of raw materials is fundamental, the understanding of ageing phenomena also leads to improvements.
Introduction
Asynchronous machines (AM) are electrical machines used in systems where reliability is set as the first requirement. The robust design of their squirrel cage rotor dramatically improves their lifespan. This is one reason why they account for more than 90% of the electrical motorization of critical systems, including, for example, the primary coolant pumps in nuclear power stations. Even if this type of rotor does not contain materials that are chemically unstable, this is not the case for stators, which have a winding insulation. Several studies (by IEEE: Institute of Electrical and Electronics Engineers, EPRI: Electric Power Research Institute, IEC: International Electrotechnical Commission, etc.) and REX (Return on EXperience, data coming from motors in operation) into the reliability of motors conclude that insulation defects are the leading cause of machine outages. Insulation design is an arduous job. Two separate domains can be identified. The first is the domain covered by standards such as IEC, IEEE, NEMA, IS (Indian Standards) or BS (British Standards). In this field of interest, the conditions of use are not described in detail, and the period of guarantee, during which the supplier is in charge of defects, is short compared to the expected lifespan (2 years compared to a 20-year lifetime). The second domain of interest concerns systems which need an extremely long lifespan (40 years and more). In this field of interest, all ageing events are carefully investigated during the design and manufacturing stages. This means that each event or criterion that is described has to be examined and integrated into the design of the machine. That is a hard job for the customer who is in charge of writing the specification. It is also a hard job for the designer who has to comply with these specific criteria. However, it is not correct to imagine that such activities are in opposition. By introducing an unrealizable condition in the specifications, the customer will have a negative influence on the design and especially on its price, which could dramatically increase. Engineers who are in charge of writing specifications must have a good overview of the machine design, the necessary performances, and the manufacturing processes and procedures. The manufacturer must keep in mind that requirements are coupled to penalties if they are not satisfied; it is not always a good idea to promise everything. Hence, customers requiring unrealistic performances will induce an incredible increase in prices, and manufacturers that accept unreachable performances will incur financial penalties at the final step. Looking at the reliability of electrical machines, insulation is one of the most important elements: any fault in design or during manufacturing can have a dramatic impact on the lifespan, and it is not always possible to discover this during the factory acceptance tests. Insulation is an element which is very difficult to design: everyone wants a low number of acceptance criteria and wants specifications which are adequate to ensure the compliance of the machine. Even if advice is provided by standards such as IEC 60505, such standards do not provide exhaustive procedures and acceptance criteria which can meet all the needs of customers. To describe such a complex situation, the authors have separated this publication into several sections which are as autonomous as possible. The first chapter introduces the arrangement of elements in the insulation wall. It
introduces at the same time its dramatic impact on lifespan and provides a first analysis of standards. The next chapter begins with the conclusion that was reached previously: if the lifespan of a machine must be improved, all the influences of ageing parameters and their modeling must be analyzed. It then appears that several standards can provide usable data or models but are not able to ensure the lifespan, as most of them do not accept a mix of ageing parameters. Nevertheless, standards obviously provide a technical base and a methodology that have to be followed. The next section introduces a historical example. This example is used to demonstrate the difficulty in undertaking improvements of insulation. Innovations can be considered as having a positive impact at one time, and their effect can be reduced a few years later. Standards organizations are awaiting any ideas that can initiate improvements of benefit to everyone. However, regarding lifespan, they must wait a long time in order to conclude whether suggested improvements are worthwhile. Nevertheless, the role of standards is not to describe the manufacturing or design process but rather to write precisely the methods or measurements which can provide, without any question, an incontestable opinion on the performances of the system. This is why the standards have retained one of the pieces of apparatus built by Fuji-Electric during the last 60 years; such an apparatus initiated partial discharge diagnostics. Thereafter, the authors highlight other improvements which are linked to manufacturing processes and are currently in use but are not cited in the standards. The next section comes back to the insulation system and points out its weakest part. Due to the structure of the insulating wall that has been defined throughout the past 50 years, the use of polymer resins is unavoidable. Improvement in lifespan is directly linked to resin improvement and its use in the manufacturing process. This section demonstrates the main drawback of the resin: it is an organic material that evolves over time and that does not behave well under thermal aging. Nevertheless, manufacturers have to live with it, and they have devised processes, such as VPI (Vacuum Pressure Impregnation), which can overcome some drawbacks. The fifth section takes into account the know-how which can improve the insulation lifespan. In the end, the best manufacturing process will be the one that does not make any errors. The authors show, step by step, the difficulty of obtaining a "good" insulating wall. The standards only provide objective measurements, and skilled persons are required to interpret the results. Hence it will appear that invalidated tests can pollute the acceptance process. In this section, a large number of potential defects are investigated and the methods used to mitigate them are presented. The methods can depend on the manufacturing process but also on the design process. At the end of the publication, the last section focuses on several well-known issues which are still relevant.
Environment Overview of Insulation Systems
The insulation of an electrical machine can be damaged by many factors, which are currently indicated by a coding that includes the severity of the environment. There is not always a specification or a criterion for each code. This coding is expressed through six letters, each associated with a digit. The general description of the insulation system codes is given in Table 1. All these factors are recognized as ageing parameters. For example, the letter D, used for the machine duty cycle, has been identified as having a great impact on the ageing of the insulation. An electrical machine having several run/stop cycles per day is more subject to abrasion of the insulation; machines used for base-load operation are not so stressed. Nevertheless, not all factors have been studied with equal importance. Therefore, when a parameter has a little-studied effect, manufacturers and customers are helpless when it comes to taking it into account: there are no standards or technical sheets to solve the issue. However, the main ageing factor, which is well known, is thermal ageing. Many standards exist and provide procedures and test methods for estimating the lifetime of an insulating system submitted to thermal stress (UL-746, UL-1446, IEC-60216, IS-11182, etc.). Many other factors cannot be well quantified and should be studied later. For example, the customer does not always know how many times his machine will operate under overvoltage or overload conditions, nor their duration. Studies periodically confirm that thermal aging is responsible for the greatest number of outages [1], and subsequently a good method to initiate the design of electrical insulation is to first deal with the effect of thermal aging. Thermal aging does not apply equally to all elements of an insulation system: the elements or materials used do not all have the same behavior under this constraint. Hence, it may be possible to increase the expected lifespan of an insulation system by modifying only one of its components: the weakest one. The constitution of the insulation wall must be examined, and, in particular, the arrangement of the insulation layers in machines which should have a long service life. These machines are essentially high voltage machines. They use a well-known insulation system which is composed of three elements that will be presented thereafter.
Table 1. General description of the insulation system codes.
Letter code (position) - Meaning
T (first digit) - Thermal factor
E (second digit) - Electrical factor
A (third digit) - Ambient (environmental) factor
M (fourth digit) - Mechanical factor
P (fifth digit) - Performance (intended)
D (sixth digit) - Duty (mode of operation)

The Specificity of the High Voltage Insulation System, Functional Aspect
The insulation of a high voltage winding can be seen as copper strands wrapped in insulation layers. The numerous insulation tapes surrounding the copper strands are not set up at random. The tapes are applied butt-lapped or overlapped; two tapes make a layer. The coil insulation located inside the slots of the stator core and the insulation located outside the slots are subject to different constraints. For this reason, the authors dissociate their analysis into two parts. Depending on the winding design, the coil turn can be composed of one wire or several wires; each wire is insulated by insulation layers. Hence each wire can be covered by turn-to-turn insulation or by strand insulation with an additional turn-to-turn insulation. The turn-to-turn insulation has two aims: the first is to achieve insulation between turns (of low thickness, as the voltage between turns is very low), and the second is related to the electrical surges caused by lightning stresses, electrical surges induced by DOL switch-off/switch-on, or electrical stresses generated by circuit breakers and IGBTs [2]. The IEC and IEEE standards request that the turn insulation must be able to withstand a surge pulse. In the IEC standard, the test voltage is 0.65 times the surge voltage (4 Un + 5 kV) and the rise time is around 0.2 µs. The surge test also takes into account the main insulation; it must be able to withstand a surge of (4 Un + 5 kV) with a rise time of 1.2 µs. The case of one turn composed of more than one wire is usually encountered when the copper cross section is too large to be bent. Such a turn is achieved by putting elementary wires of smaller cross section in parallel. In this situation, it is preferable to apply a thin insulation for the electrical separation of the elementary wires composing the turn and an additional insulation designed to withstand the surge voltage (Figure 1).
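As a quick numerical illustration of the surge requirement quoted above (a test level of 0.65 times (4 Un + 5 kV) for the turn insulation and the full 4 Un + 5 kV for the main insulation), the short Python sketch below computes the two test levels for an assumed rated voltage; the 6.6 kV value is an illustrative choice, not a figure from the standard.

def surge_voltage_kv(un_kv):
    """Full surge level applied to the main insulation: 4*Un + 5 kV (rise time about 1.2 us)."""
    return 4.0 * un_kv + 5.0

def turn_insulation_test_kv(un_kv):
    """Turn-insulation test level: 0.65 times the surge voltage (rise time about 0.2 us)."""
    return 0.65 * surge_voltage_kv(un_kv)

un = 6.6  # rated voltage in kV (illustrative assumption)
print(f"main insulation surge level : {surge_voltage_kv(un):.1f} kV")
print(f"turn insulation test level  : {turn_insulation_test_kv(un):.2f} kV")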
A one-strand or multi-strand conductor carries turn insulation. Its aim is to insulate the strand(s) and also to sustain the voltage surges produced by lightning or induced by non-sinusoidal inverters. When the cross section is too large, the skin effect can appear and reduce the effective cross section. The strand is therefore subdivided to mitigate such an effect, and each wire is insulated (strand insulation).
Thereafter, the main insulation layer is added. Its thickness is designed according to the voltage supply. The main insulation exists both in the stator slot and outside the slot. For supply voltages greater than 6 kV, additional layers are added. In the stator slot, this last layer is a conductive layer, namely the Conductive Armor Tape. Its aim is to ensure the same potential along the coils: the ground potential (Figure 2). Such a layer seals the main insulation in the slot. Even if this layer is considered to be a conductive layer, designers should not select a tape with a low resistivity. Looking at the situation, the Conductive Armor Tape covers the entire slot surface. This means that such a layer can close the electrical circuit formed by two adjacent silicon iron sheets in the stator core. Eddy currents may appear in the stator core and may deteriorate this conductive layer. A low resistivity material induces high eddy currents which will age this layer, and such a case is not favorable for a long lifespan [3]. Hence, the resistivity of this layer has to be chosen in accordance with the resistivity of the main insulation. A few kΩ for the Conductive Armor Tape is a good choice; this sets the potential of this layer to the ground potential without any doubt. The main controversy is the minimum resistance of the surface. Two researchers, Liese and Brown, have studied vibration sparking in large air-cooled generators; such events have been caused by a too-conductive coating in the Conductive Armor Tape [4,5].
Outside the slot, the question is different. The windings do not touch the iron parts that are at ground potential; hence, there is no need for a conductive layer. However, a high level of electrical stress appears at the end of the Conductive Armor Tape. The electric field in this area is large enough to initiate corona discharges that will rapidly age the insulation. Air can sustain an electrical field only up to a certain value; if this value is exceeded, an electrical discharge appears. Outside the insulation wall, it is called a corona discharge; if such a discharge appears inside the insulation wall, it is called a partial discharge. The authors will investigate the latter phenomenon more precisely at the end of the article. An excess of discharges will lead to outages in a short time. The well-known solution for machines supplied by a voltage greater than 6 kV is the use of a semi-conductive layer which will spread the electrical field and reduce the maximal electrical stress induced by the Conductive Armor Tape. The stress-grading issue will be described later in the article; only the functional aspect is presented here. This stress-grading (semi-conductive) layer is not applied over the entire end winding but only over a short section (Figure 3). The manufacturing process involves a bending of the coils to adapt them to the geometry of the stator and allow their insertion in the slots. It is usual to have, for this part of the winding, an excess of insulation thickness or some folded tapes. When such work is done by hand, a lower wrapping quality is encountered [6]. Even if such coil parts are not in the slot, they also need to be manufactured with care.
The Design of High Voltage Insulation Systems and the Main Raw Materials
The design of an insulation system is the result of many years of experimentation and of investigations into machine failures during operation. Conversely, a few machines in operation have an incredibly long lifespan. They must be taken into consideration, as they can initiate ideas that could be used to refine the design rules that are available to engineers. One important aspect, which has been confirmed for forty years, is the choice of the main material to be used in the insulation system. Even if a multi-layer structure is used, all the layers can be made using the same insulating tape. Nowadays, the main components are fiberglass tape and mica with suitable bonding materials. The most important material in the layer is the mica. Its outstanding dielectric properties, thermal endurance, inertness, and non-flammability put it at the top of any list of insulation materials [7]. Mica can tolerate a very high electrical field; it can theoretically sustain an electrical field of 140 kV/cm. It is surpassed only by barium titanate, which can sustain 1760 kV/cm. However, this material is not user-friendly when insulated tapes are to be produced. Mica is a stone and cannot be used as such (Figure 4). The crystalline structure of mica forms layers that can be split or delaminated into thin sheets. These sheets are chemically inert, dielectric resistant, and flexible. Companies that produce insulating tapes have to reconstitute a mica layer using these sheets (thickness between 0.025 and 0.125 mm). The reconstituted mica layer is bonded to a flexible support such as fiberglass tape or PET tape. Even if this support can act as insulation and can improve the performance of the mica tape, this is not its main goal.
The mica layer does not have a high mechanical strength and needs a support that ensures its holding. Such a compound has lost certain properties of the original mica: the mica tape is not able to sustain the electric field mentioned previously, but it has retained important properties. Mica is not degraded by electrical discharges and is not affected by thermal aging. Chemically, mica can be expressed by the following general formulation [8]:
K(Al,Mg)2-3(AlSi3O10)(OH,F)2

in which the potassium atom (K) is the element that induces good protection against the stress due to the electric field [9]. Two varieties of mica are regularly used in insulation systems [10]: muscovite, KAl2(AlSi3O10)(OH)2, and phlogopite, KMg3(AlSi3O10)(F,OH)2. All the elements which are added to the mica are only components which help the mica to stay in a situation where its marvelous properties can be exploited. Fiberglass is a layer which provides the mechanical properties. The wrapping of copper conductors with mica and fiberglass tapes is not enough. The surrounding air must be replaced by another material which has better insulation properties. In addition, this material must be liquid in its initial state, to creep into all the voids, and solid in its final state, to seal the insulation wall. This is the reason why resins such as epoxy resin or polyester resin are used. During the manufacturing process, the epoxy resin replaces the air in the insulation wall and is then cured. It helps the insulation wall become a rigid element. Although this method seems adequate, the resin is an organic compound and does not have a stable behavior over time. It is not an inert material. It is denatured by water, which introduces moisture and initiates treeing [11,12]; the polymerized chains which have been produced during the curing sequence can be broken or naturally split by time and temperature. This is natural aging. Therefore, a long lifespan of the insulation system will be obtained if the elements other than mica are well designed and their behaviors are well known. It is obvious that an insulation system using a compound of different materials will be limited by its weakest material.

As a conclusion, it appears that the insulation wall is an arrangement of insulation tapes. Each element of this arrangement has a single purpose related to the electrical stresses. Tapes must be able to sustain the highest level of breakdown voltage. Such a requirement is met using mica as the raw material. Mica is the most important material present in an insulating layer. It is also the only chemically inert material. As it has a high electrical breakdown voltage and is thermally stable up to 500 °C, it is not this material that will limit the lifespan of the insulation wall. The other elements, such as the epoxy resin or the organic components mixed with it, will not behave so favorably. To ensure that the insulation wall will have a long lifespan, there is no other solution than doing tests. The tests are described in the standards but are not able to cover all requirements. The main test to be carried out is the evaluation of the thermal endurance of the insulation system. In doing such a test, additional measurements are made. They provide additional elements related to thermal aging. They will be used during the life of the machine as references for monitoring the degradation of the insulation wall.

Insulation System and its Aging
A perfect insulation wall does not exist. Even if the epoxy resin fills all the voids which exist in the insulation tapes, it remains that the resin is an organic material whose properties will evolve over time. Hence, lifespan assessment is a hard job. During the design process, engineers must integrate some data that determine the thickness and the composition of the insulation wall. These data are the voltage supply or the rated voltage, the operating temperature, the mechanical stresses, etc.
Regarding machines with a very long lifespan, since the number of manufactured machines is low, there is not enough information from operational situations that could help designers. Sometimes, new machines are manufactured before the end of life of the first manufacturing orders. Given this situation, improvements to insulating walls are more difficult to achieve, and specifications may vary from previous production. In any case, engineers do not begin from nothing. The standards suggest the use of the previous insulation system as a reference in the qualification process of a new insulation wall. The insulation in electrical machines is a compound: epoxy resin, mica and fiberglass. Keeping epoxy resin in the manufacturing process of insulation walls is not a wrong idea. Studies are continually carried out to increase knowledge of its ageing process and of how to improve its thermal stability. The resin is an organic material whose thermal behavior is understood but not always manageable. The first phenomenon that appears in the resin is the increase of polar products. This results from a degradation process (oxidation, breaking of chemical bonds, etc.) which occurs even if the material is not subject to excessive electrical, chemical, mechanical or thermal stresses. The most well-known aging process is the thermally activated degradation reaction. This degradation occurs at any temperature, and the reactions increase dramatically with temperature [13]. Regarding such a degradation process, it is expected that the lifespan of the insulation will follow the degradation curve of the resin.

The Thermal Ageing in Front of Standards
Thermal aging is thus a phenomenon which is understood and documented. Concerning the resin, standards exist and are useful in determining the lifetime of a polymeric material used in electrical equipment. The UL and IEC standards give a relationship between the thermal aging of an insulation material and its lifespan. The expected lifetime of the insulation system is 20,000 h at a temperature which is the highest temperature allowed by the insulation class (Table 2: thermal classes and maximal permissible temperatures). For the insulation system, the aging test is successful if the sample, after accelerated aging, can withstand 50% of the initial breakdown voltage. In the event of a breakdown, a puncture appears in the insulation wall. The aging of the resin is linked to its molecular behavior. During the manufacturing process, the molecules are highly reactive and the epoxide groups react with polyamide, organic acid or acid anhydride to produce a cross-linked thermosetting solid resin [14]. In these materials, two types of links exist: the first is a strong primary bond (covalent); the second is a weak bond (van der Waals). Aging is a local change in the structure of the links. Primary (covalent) bonds are broken and only weak bonds remain. Chain splits produce free radicals that can trigger other chain reactions. Such degradation accelerates with an increase in temperature. It is usually assumed that the rate of this degradation has the empirical Arrhenius form [15]. The main feature of this formulation is its exponential form:

L = A exp(B/T)    (1)

where L is the thermal endurance time in hours, T is the temperature in K, and A and B are constants dependent on environmental conditions. A small increase or decrease in temperature has a great impact on the lifetime; for example, it is usual to say that the lifespan L is divided by 2 if the temperature increases by only about 10 °C.
Equation (1) is mainly related to one material and not to the insulation system. Nevertheless, it is recognized that the lifespan of the insulation wall will follow the form of its weakest component. Unfortunately, the only way to determine the parameters of Equation (1) is by using test benches, and these tests must be done with real coils using the insulation system to be evaluated. It is not permitted to mix several manufacturing processes, because these also have a great influence on aging. As the standard suggests an expected lifespan of 20,000 h, it is not realistic to run tests which would last more than two years. The standards anticipate this drawback and suggest accelerated aging tests, obtained by increasing the temperature.

First of all, the end-of-life criterion must be determined. For the insulations that are used in high power electrical machines, the criterion is the electrical breakdown voltage, U_breakdown. Such a parameter is measured on coils that have undergone no aging. After being submitted to the maximum allowed temperature for 20,000 h, the specimens must withstand 50% of U_breakdown without damage. It is then necessary to determine the thermal steps and the sequences of aging. For example, for an insulation system that should withstand a temperature of class F (155 °C), the IEC standard gives the experimental sequences for the test bench (Table 3). With this test, and after applying the end-of-life criterion, one curve can be drawn using the recommendations: the thermal endurance graph (Figure 5). On this graph, three important values are read. The first one is TI (Temperature Index), here 155 °C (class F). The second one is HIC (Halving Interval in Celsius), here 10 °C, and the last one is the lifespan: 20,000 h. HIC = 10 °C means that the insulation system can withstand a temperature of 165 °C (155 °C + HIC) during 10,000 h. Hence, by limiting the operating temperature of a machine using class F insulation to 120 °C, the thermal lifespan will be greater than 200,000 h (25 years of continuous duty). The usual commercial argument is therefore demonstrated: class F (155 °C) insulation under class B thermal stress (120 °C) gives a lifespan of 200,000 h.
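This commercial argument can be checked numerically. The short Python sketch below (an illustration of the reasoning, not a procedure taken from the standards) calibrates the Arrhenius law of Equation (1) on the two points implied by the thermal endurance graph, 20,000 h at TI = 155 °C and 10,000 h at 165 °C, and evaluates the lifespan at 120 °C; the function name and the two-point calibration shortcut are ours.

```python
import math

# Calibrate L(T) = A * exp(B / T) (Equation (1)) on the two points read
# from the thermal endurance graph: 20,000 h at TI = 155 degC and, since
# HIC = 10 degC, 10,000 h at 165 degC.  Temperatures must be in kelvin.
T1, L1 = 155.0 + 273.15, 20_000.0
T2, L2 = 165.0 + 273.15, 10_000.0

B = math.log(L1 / L2) / (1.0 / T1 - 1.0 / T2)   # in K
A = L1 / math.exp(B / T1)                       # in h

def lifespan_hours(t_celsius: float) -> float:
    """Thermal endurance time predicted by the calibrated Arrhenius law."""
    return A * math.exp(B / (t_celsius + 273.15))

# Class F insulation operated at class B thermal stress (120 degC):
print(f"L(120 degC) = {lifespan_hours(120.0):,.0f} h")
```

The prediction lands near 3 × 10^5 h, consistent with the "greater than 200,000 h" figure quoted above.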
Such a result is not fully satisfactory because it does not take into account the environment of the machine during its expected lifetime. Any change can have a significant impact on the lifespan.
Even though G. Stone emphasized in 2004 that oversizing the insulation wall was often used in machines to prevent premature failure [16], he had to conclude several years later that the competition in cost reduction does not provide a positive result for lifespan enhancement [3]. Over the past 20 years, it has to be noticed that machines have seen an increase of more than 50% in power density. This increase is also linked to a decrease in insulation thickness. Such a situation heralds suitable conditions for unexpected ageing under voltage stresses. Hence, some manufacturers built new machines with an insulation system having a higher level of partial discharges in the insulation wall [17]. When introducing an improvement in one part of the machine, or when increasing the rated characteristics, all the other parts have to sustain the new operating conditions. When such an upgrade is observed on real machines, it is usual to have to deal with new problems in parts of machines which have never been under investigation. Improvements have initiated new conditions of use. Standards will not provide any help, as they are written on past experiments or return of experience. In a few pages, Section 4 will give a short description of the history of one insulation system. It will be explained that improvements are not always improvements, and that industrial research can lead to potential evolutions.

Ageing Parameters Other Than Thermal Ageing in Front of Standards

Thermal aging is documented, but it is not the unique source of stress. The relationships between the other constraints and the endurance time are similar empirical forms. This is the case of voltage aging, which is written in Equation (2). The voltage endurance test is based on IEC or IEEE standards. It can be used to compare two insulation systems under the same voltage endurance conditions [18]. A material having a higher dielectric breakdown will have a longer lifetime. Such a property will be seen in the "c" and "n" parameters (c increases or n decreases):

L = c E^{-n}    (2)

where L is the voltage endurance time in hours, E is the voltage, and c and n are constants which depend on other environmental conditions. An immediate application of Equation (2) is the ratio form (3), which can be used to evaluate the lifetime of an insulation system using a power supply that induces a monitored overvoltage; the test duration decreases:

L_1/L_2 = (E_2/E_1)^n    (3)

where L1 and L2 are the lifespans at the voltages E1 and E2.

Frequency also has a similar impact. It is presented in Equation (4). Such equations take on great importance when considering accelerated ageing, which can be done with an increase of the frequency or an increase of the voltage [19]:

L_1/L_2 = f_2/f_1    (4)

where L1 and L2 are the lifespans at the frequencies f1 and f2.
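As an illustration of how Equation (3) shortens test campaigns, the sketch below computes the test duration at a monitored overvoltage that is equivalent to 20,000 h at rated stress. The endurance coefficient n = 10 is a hypothetical value chosen for the example, since real values of c and n must be fitted on test benches.

```python
# Accelerated voltage-endurance test planning with Equations (2)-(3).
# n = 10 is a hypothetical endurance coefficient chosen for illustration.

n = 10.0
L_target = 20_000.0   # required lifespan at rated stress, in hours

def equivalent_test_time(overvoltage_ratio: float) -> float:
    """Test duration at E2 = ratio * E1 that is equivalent to L_target
    at the rated stress E1, from L1 / L2 = (E2 / E1)**n."""
    return L_target / overvoltage_ratio ** n

for ratio in (1.5, 2.0, 2.5):
    print(f"E = {ratio:.1f} x rated -> {equivalent_test_time(ratio):10.1f} h")
```

With n = 10, doubling the stress divides the equivalent duration by 2^10 ≈ 1000, which is why a monitored overvoltage is such an effective accelerator.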
As a conclusion, the influence of many parameters on the lifespan is quantified. Nevertheless, there are no absolute results. Comparison with another insulation system is the key of the lifespan assessment. The standards provide only a guidebook on methodology. Hence, the standards have achieved their goal of providing a possible comparison between insulation materials. However, the standards do not provide any indication about the combination of constraints. Moreover, many stresses are out of their scope: mechanical stress, moisture, starting conditions, etc. Manufacturers are aware of these limits and believe that existing standards need to be improved. This is why they continually do research on aging phenomena. Sometimes such improvements are significant and lead to standards revisions. Therefore, in the next section, a study case from the Fuji Electric history will show the influence of manufacturers on standards.

Standards Evolutions and Experimental Aspects

During the standard ageing test, the coils are not powered and are not submitted to mechanical stresses. Moreover, the standards suggest that any added mechanical stress or electrical stress should not introduce any significant additional aging during the thermal aging test. This means that such stresses cannot modify the ageing test; they can only be considered as diagnostic factors. Designing an insulation system using only Figure 5 will provide unrealistic information about the lifespan of the motor in operation. That is why manufacturers use additional tests and diagnostics to evaluate the lifetime of their new insulation process. This is especially true when new elements or materials are added. For example, in 1992, the Electric Power Research Institute (EPRI) released a report on lifespan assessment. In this report, several insulation systems were observed. It begins with the old micafolium or asphalt-mica insulation systems, and it ends with the recent epoxy-mica or polyester-mica insulation systems. Within the EPRI analysis, although epoxy or polyester resins provide a great improvement in insulation, they also introduce an issue in the acceptance criteria: how can one provide insulation quality assurance when using new materials? Such an evolution will be initiated by the manufacturers and their proprietary tests. Many manufacturers have a history full of events in which twists and turns are usual. As the aim is not to write the whole history, only one manufacturer is used as an example.
Standards Evolution and Manufacturing Progress in Insulation Knowledge, a Historical Example

In 1959, the Fuji Electric Company published an article about their new insulation system using epoxy resin [14]. In this article, Fuji compares the properties of shellac/mica, polyester/mica and their proprietary insulation system, epoxy resin/glass (namely F-RESIN). A large number of parameters, such as flexural modulus, breakdown voltage, dissipation factor and weight change in a submerged oil environment, were examined. At that time, there were no standards for these aging tests or diagnoses. Fuji Electric performed several thermal aging tests, one set on the mechanical properties and another on the breakdown voltage. The durations used are short compared to the actual durations indicated in the standards: Fuji Electric did not exceed 500 h at 180 °C, which is less than the time indicated by IEC. Nevertheless, Fuji Electric heralded some important aspects that are now in common use. In 1959, corona discharges outside the insulation wall were of interest. The voltage level used to power machines was high enough to initiate an electrical ageing phenomenon, now known as partial discharges. Fuji Electric built one of the first corona pulse counters. In doing so, they attested that their new insulation system was not sensitive to corona discharges. They inferred that this result was related to their vacuum and high pressure impregnation system coupled to the excellent fluidity of F-RESIN. With this new process, they thought that they had no voids in the insulation wall. Moreover, they imagined that their solution could remain mica-free, as they considered mica to be a source of voids. History shows that epoxy resin is not void-free and that the mica tape can be associated with epoxy resin. The epoxy resin does not behave well under long thermal ageing; it cannot be used alone. The main result from the Fuji Electric experiments is the use of the corona pulse counter. Such a device has become a usual apparatus, often used by manufacturers to evaluate the quality and degradation of the insulation wall.
In 1972, the Fuji Electric Company published another article, about the F-class "Stabilastic"™ insulation system [20]. The mica tape and the epoxy resin were by then in regular use. The corona effect and partial discharges were monitored by the designers, and new environmental conditions had been incorporated. One of them was the high switching surges induced by the new vacuum switches. No standard or data were available to integrate this operating condition, so the manufacturer developed a test bench. The coils were heated to the maximal temperature (155 °C) and a high voltage was applied at 500 Hz. In doing so, Fuji Electric used accelerated aging methods. By studying the results of the breakdown voltage after aging, Fuji Electric concluded that the F-class "Stabilastic"™ insulation can withstand switching surges. Voltage aging tests were then developed. The coils were powered at several different voltages, from low to high, and the time to reach breakdown was recorded. Using these results, it could be concluded that, up to the rated voltage, the expected lifetime of the machines would be over 100 years. Unfortunately, this test was done without additional stress, but the manufacturer knew that the same had to be done including all of the stresses. Moisture is an issue and needs to be tested on test benches that provide the lifespan of the insulation system confronted with possible moisture pollution. Fuji Electric performed a water immersion test on coils for a long time (80 h). Such a test has similarities with the test associated with IEEE 429. It should be noticed that such a test is only applied to new motors. Even though the insulation system successfully completes the water immersion test, the standards do not provide assurance for the rest of the lifetime. If the customer specifications concern the whole life of the machine, it would be a good idea to test the stator windings by water immersion at the end of their life, or to do the same test after decommissioning.
In 1979, Fuji Electric was investigating the mechanical properties and qualification of coil insulation [21]. The F-RESIN had been used for 20 years and data from operating machines were by then available. They show that reliability must take into account the number of starts. This effect has been observed in machines used for pumping in power plants. The manufacturer identified that insulation breakdowns were due to electromagnetic forces and thermal stresses induced by motor starts. They mainly noticed the influence of the coil deformation on Δtan(δ) (tip-up). Such an effect is linked to the mechanical fracture of the insulation layer induced by large displacements of the strands. These may appear when the motor starts, if the end-windings are not properly fixed. This behavior should be coupled with mechanical fatigue. Fuji Electric linked the average stress that acts on the coils to the number of solicitations (see the sketch below). In doing so, designers can determine the lifetime of the motor as soon as they know the number of starts. Another field of interest was the short-circuit situation. The strains induced by short circuits could not be studied with the same method, so the manufacturer built an impact tester that replicates the short-circuit effect on a coil. They also studied the temperature distribution during steady state. Using a machine running at rated power, they observed that a high thermal stress is encountered in the insulation wall, precisely at the end of the core. Since this area was already concerned by a high electrical field, it becomes a weak part of the winding. Increasing the length of the conductive tape outside the iron sheet stack moves the highly stressed area away. Such an idea is summarized in Figure 6. Notice that the length of the conductive layer also depends on the vicinity of the iron parts at ground potential. To prevent high electric fields between the end of the conductive layer and the iron parts, this outside length must be adapted. Hence, by studying one mechanical effect, Fuji Electric also discovered other issues, and the last recommendation enhances a well-used design rule.
Figure 6. At the end of the iron sheet stack, the thermal stress encountered in certain designs is high enough to speed up the aging of the insulation; this affects the area shown in yellow in (a). As this area also undergoes electrical field stresses, the conductive layer is elongated by a short length after the end of the stack (b); in doing so, the thermally affected area is now outside the electrically stressed area, coloured red (c). In addition, a semi-conducting layer will be added.
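The "number of starts" reasoning above lends itself to a small numerical sketch. The Basquin-type S-N law used below, and every number in it, are illustrative assumptions of ours, not Fuji Electric's data; the point is only that, under such a law, a modest reduction of the per-start stress buys a disproportionate number of extra starts.

```python
# A minimal sketch of the "stress per start -> number of starts" reasoning
# attributed above to Fuji Electric.  The Basquin-type S-N law and every
# number below are illustrative assumptions, not the manufacturer's data.

C = 1.0e7   # hypothetical starts-to-failure at unit per-start stress
m = 8.0     # hypothetical fatigue exponent of the insulation layer

def starts_to_failure(stress_per_start: float) -> float:
    """Allowed number of motor starts under N = C * S**(-m)."""
    return C * stress_per_start ** (-m)

# Better end-winding fixation lowers the per-start stress and pays off
# disproportionately in the number of allowed starts:
for s in (1.0, 0.8):
    print(f"S = {s:.1f} per unit -> {starts_to_failure(s):14,.0f} starts")
```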
Insulation Knowledge, Qualification and Acceptance Tests

Other study cases can surely be found by observing other machine manufacturers. ABB, General Electric, Siemens, Mitsubishi Electric, Westinghouse, Toshiba, Hitachi, Alstom, etc. have contributed in the past and are still contributing to the development of the sector. We should also not forget the insulation manufacturers which provide the mica tape and the raw materials used in the insulation wall. Nevertheless, the latest studies carried out by Fuji Electric have not been followed by a standard draft. Even if mechanical stresses are present in actual standards, they are not included as aging parameters but only as pre-diagnostic parameters associated with a short mechanical stress cycle. In the UL standard, the coil samples are mounted on a vibration table and are exposed for 60 min to a sinusoidal vibration at a frequency around 60 Hz, with a constant acceleration of 14.7 m/s². Such an environment is therefore far away from the real start or short-circuit conditions of electric motors. The standards do not correspond to the real mechanical operating conditions of machines because the real operating conditions are very hard to quantify and accelerated tests are not provided; otherwise, the qualification could be performed by doing all the required engine starts.

Each manufacturer has experienced several potential defaults in the design or testing of new machines or machine parts. Industrial companies have no obligation to broadcast this information, as it is part of their proprietary knowledge. Even though vacuum pressure impregnation (VPI) is now a widespread process, each manufacturer has its own chemistry and its own winding design. They do all the technical tasks for their own benefit. This is why articles published by manufacturers about insulation systems have similar interests: they introduce the enhancements made by the manufacturers, but do not reveal any know-how [22].
Another environmental aspect that must be taken into account is the humidity parameter, and it concerns each manufacturer and also each customer. Water can be absorbed by the epoxy resin. Even if this water can be removed by drying, it has a negative impact on the molecular chains. In the same way as thermal aging, chain scissions occur and lead to a decrease in mechanical strength [23]. Water and moisture have not always been a danger for insulation; they became an aging parameter when the main binder became epoxy resin or another water-sensitive material. For many years, the epoxy resin was assimilated to a water-resistant material. Even if this behavior seems to be respected at the beginning of the material's life, such a property will not be systematically verified at the end of the lifespan, especially after 30 or 40 years. Looking at the epoxy resin at the macroscopic scale, it appears as a hard material having a regular and smooth surface, a surface that should not trap water or moisture. This conclusion is contradicted by other users of epoxy resin: the shipbuilders. The shipyards prefer the epoxy resin to the polyester resin, since the polyester resin is destroyed by osmosis and cannot accept any water contact. The epoxy resin does not accept any water contact either, but not for the same reason: the epoxy resin is not watertight. Water can penetrate the epoxy by capillarity and can initiate damage in the molecular chains. A decrease in the mechanical properties would not be so hazardous for the epoxy used in insulation systems, but the intrusion of moisture or water into the insulation wall will induce other effects. This process leads to insulation breakdowns by treeing. Looking at the standards, acceptance tests do not accept water intrusion into the insulation wall. They suggest drying the machine in case of moisture. No acceptance test can definitively qualify the insulation system in the presence of a moist environment, or accept insulation degradation due to moisture during the expected lifetime. However, the monitoring of the insulation wall during servicing periods can detect this aggression.

Bonding Material: the Epoxy Resin, the Weakest Part of the Insulation System

The epoxy resin has been used as a bonding material since the 20th century. The development of this synthetic resin was initiated in 1940. The polyester resin was also developed during the same period. Both were immediately considered as possible materials for the insulation of electrical machines. In 1949, Westinghouse used mica tape and polyester resin in its insulation system [24]. General Electric followed them two years later.
In 1958, G.E. used mica tape and epoxy resin for a new insulation system [25]. Both insulation systems exhibited better performance under high temperature conditions compared to older insulation systems (asphalt-mica). The polyester resin, even if it can provide a lowering of costs, does not tolerate water at high temperature [26]. Manufacturers of electrical machines were not the only users of the epoxy resin. Thus, the maritime sector initiated studies about epoxy resin, water absorption and mechanical strength [27]. In this report, the authors studied the influence of curing and water absorption on the behavior of an amine-epoxy resin. Even if, for an insulation system, the mechanical properties are not the determining elements, designers need information on the glass transition temperature and moisture absorption. The glass transition temperature (Tg) depends essentially on the curing temperature and also on the curing time. When a resin is completely cured, Tg is at its highest level. For the insulating system, the epoxy resin should be considered as fully cured. The authors observed that the glass transition temperature increases with curing even after polymerization is reached. This means that a fully cured resin can still evolve. Overheating may occur in the electrical machine when the operating temperature is high. The result will be a material that moves away from its equilibrium state and therefore has more free volume and a greater propensity to absorb water.

The epoxy resin used in the insulation system is a mixture of elementary epoxy bases, hardener and catalyst. The epoxy resin can provide a lifespan that is able to reach 20 years. Extending the lifespan up to 30 or 40 years is not an easy goal to achieve without regular improvements. Therefore, the great improvement came from the manufacturing process. The VPI is a manufacturing process which appeared in 1956. Dr. Meyer, in collaboration with Westinghouse (Electric Company), applied this basic process for the complete filling of all the interstices of the insulated components. The first application that used the impregnation system was based on bitumen-bonded mica flake tape. The insulated coils were introduced into an autoclave and vacuum dried, and then a high-melting bitumen compound was added. When the stator is totally immersed in the compound, pressure is applied to assist the penetration. The filling is also enhanced by heating the bitumen compound to decrease its viscosity. At the end of the process, the unused bitumen compound is removed from the autoclave. Nowadays, mica-glass tape and epoxy resin have been substituted for these materials, to the benefit of this insulation system. The process has not radically changed. The mica tape is wrapped around the strand. The coils are inserted in the slots. The stator with the coils is introduced into the vacuum chamber. Vacuum is applied and the epoxy resin is added. The epoxy resin must be liquid enough to fill every interstice. At the end, the vacuum is removed and pressure is applied. Thereafter, the epoxy resin is cured at high temperature to obtain a rigid and nearly indestructible insulation wall.

Enhancements are not always a customer or a manufacturer choice. In 2014, a European patent was published under the exclusive trademark of ABB Research Ltd [28]. Their epoxy resin is designed to be volatile-free, meaning that there is no volatile solvent. This single epoxy resin composition also has a prolonged pot life, which is suitable for storage, and a processing temperature within the range of 40 °C to 70 °C.
As expected, this epoxy resin has good electrical properties as well as a low viscosity at these temperatures, which is required in the impregnation process. Note that the enhancement of this insulation system is related to health and not to industrial improvements (REACH recommendations, in which styrene, used as a solvent, is forbidden). In fact, designers have to find an answer to two requirements which are usually contradictory: a long pot life and a short gel time. For electrical insulators using aromatic epoxy resin compounds, the frequently used material is the diglycidyl ether of bisphenol A (DGEBA) (Figure 7). Such a long molecular chain cannot alone ensure a hard material. The hardener is the main actor for this behavior. The hardener is an anchor point for the epoxy groups. If the hardener has only two reactive groups, it only provides a simple molecular chain without mechanical rigidity. To generate a three-dimensional structure, the hardener must have a greater number of active sites (Figure 8).

When the reaction begins, the first reaction introduces a chain extension. That is a suitable behavior, as it generates a material which is still soft and can creep into small voids. Thereafter, the cross-linking reaction takes place and initiates a three-dimensional structure. One of the most used hardeners is the polyamine, having two or more primary amino groups (-NH2) as anchor points. Such a hardener provides high cross-linking (Figure 9).

Regarding machines having an extremely long lifespan, these chemical reactions are of great importance. First of all, the polymerization is never perfectly completed when the stator leaves the curing vessel. Even if several impregnations and curings are carried out, the resin lying between the layers will not be completely inert. This fact is found on machines in operation, where an increase in insulation resistance is measured during the first months of operation. Indeed, the polymerization continues with the heat generated by the motor when it is running. Therefore, when the acceptance tests are performed in the factory, they are done on an insulation system which may not be in a stable state. That can
be an issue for machines which are stored as spares.
Regarding the manufacturing: even if the chemistry is well understood [29], and even if reactions can be managed by catalytic systems, the first goal remains a complete filling of the insulation system by removing all the bubbles. The resin must reach and fill all the internal layers, from the deepest strand insulation to the upper conductive layer. Such an impregnation is more difficult for high voltage machines, where the number of layers in the insulation system is particularly important. Two contradictory goals must be achieved during this process: mica tapes should be firmly clamped on the copper conductor to ensure continuity of the insulation, and must also be loose enough to allow the creeping of the epoxy resin. Such a requirement is always in the interest of manufacturers, and patents such as US Patent 4,918,801 suggest routine tests to be performed on individual coils in order to verify the degree of resin filling between coil turns [30]. Thus, in this chapter, the authors have introduced a few elements related to the chemistry of polymers. These few lines remind us that the issues involved by organic chemistry will not be solved for a long time. Manufacturers must mitigate such issues by designing new manufacturing methods, or by introducing, for the critical process steps, measurements which are able to verify the resin filling. The cited patent is linked to such a measurement. Improvement in the manufacturing process is well illustrated by the VPI. The next chapter will deepen this process and examine, step by step, the critical arrangement of the insulation wall.

The Defects and How to Find Them

The customers have at their disposal a large number of standardized tests that are able to provide information on the status of the insulation. However, these tests are not dedicated to machines with long-lifespan specifications. There is no test that is able to ensure the lifetime of the machine. Machines with a long lifetime are machines which should reach the decommissioning date and be able to withstand all the events described in the specifications until the end of the last day. In the end, the best test would be to check the machine at the end of its life! This is an unrealistic view; manufacturers will not accept it. Only tests that follow the factory release of the machine are acceptable. Among the acceptance tests that may be suggested, three categories can be identified. First of all, the GO/NO-GO tests: they provide no information, only a "passed" criterion. Next are tests that can be linked to numerical values with objective criteria, such as the Polarization Index, which can be related to the status of the insulation wall. The last tests, such as partial discharge measurements, are not real acceptance criteria but are very important results for monitoring the insulation during its lifetime. In the acceptance test, partial discharges can be low, and this does not necessarily mean that the lifespan is high: degradation can still occur due to the future condition of the environment. Many tests like this one provide the initial status of the machine insulation. The requirements must be related to the state of the art and also to data coming from monitored machines. When an outage occurs, it is easy to point out the parameter that is out of limits and do a reverse analysis. The customer can focus on his own know-how. Another solution is to introduce criteria into standardized measurements. The customer must integrate into his engineering team machine designers who are able to translate the symptoms that can initiate defects
into acceptance test criteria. The usual tests are summarized in Table 4. These acceptance tests may be associated with the contractual requirements, but the acceptance tests used during insulation qualification cannot be cited in such a document (Table 5), since they take into account the insulation system and not the machine. In selecting values for the criteria, the customer can improve the insulation system performances. The customer must have a good insight into this implication. The parameters associated with the criteria are highly dependent on the technology used by the supplier. Not all technologies are interchangeable, and they do not have the same areas of use. The VPI is suitable for large series, and RR (Resin Rich) is suitable for high power machines such as alternators (several hundred MW). Lifespan is not based on the standards listed in Table 4 but is related to the standards listed in Table 5. It is obvious that manufacturers will do their best to qualify their insulation system using the best approved processes: return of experience, updated standards and improvements validated by industrial realizations. In the following sub-sections, the authors will present these regular improvements.

Manufacturers using a conductor built with several strands in parallel must insulate each strand. This insulation layer sees no voltage stress and can be achieved with a reasonable thickness. However, by forgetting its impact on the insulation wall, some misunderstandings can arise. Usually, the strands are not manufactured by the manufacturer of the motor; they are supplied by external providers. Such providers sell anything from flat strands without any insulation to flat strands wrapped with one insulation tape and one enamel coating (Figure 10). Even if the voltage between strands is low, designers should keep in mind that electrical stress can appear at the surface of the copper strands if the bonding is not well done. Such a bonding does not have a great impact on low voltage machines, but this conclusion is not the same for high voltage machines. This layer is a sensitive layer because it is the deepest layer of the insulation system. The epoxy resin must penetrate from the surface of the insulation wall to this deepest area. If such a creepage is not satisfactory, it may initiate roots for partial discharges and can contaminate the insulation wall (Figure 11). That can be identified during partial discharge measurements. The PD activity related to this defect depends on the temperature: it decreases as temperature increases, because the thermal expansion flattens the voids.

Another situation that can be encountered is a local rip between two strands which are in parallel (Figure 12). The lack of wire insulation leads to a short circuit between the strands and can easily be discovered during manufacturing. Before any soldering of strands, an insulation measurement between each strand can be done. If only two strands are not insulated, it means that a hole exists in the strand insulation.
Turn Insulation, Criteria and Issues

Such insulation may have a similarity with the strand insulation, especially if only one strand composes the turn. However, the goal is absolutely not the same. Even if the turn insulation surrounds the strand (if it is alone) or the strands (if they are multiple), its role is to withstand the surge voltage mentioned in Section 2.2. Its thickness is related to the overvoltage.
The calculation of the turn-insulation surge in the electrical machine is similar to the calculation used in power transformers [31,32]. Such electrical devices are more often subjected to voltage surges than motors installed in closed areas equipped with surge capacitors. However, in the case of machines with a long lifetime, surge capacitors should be avoided due to their short lifetime. Therefore, it is mandatory to specify a lightning strike withstand in the requirements (mentioned in Section 2.2). Voltage surges and their modeling have been taken into account for several decades [33,34]. Several publications invite users to assess the stress by providing analytical and graphical resolutions that quickly give enough data to conclude on the issue. Methods used in power transformers to mitigate electrical stresses induced by lightning are not applicable to motor manufacturing: in motor manufacturing, the consecutive turns must be geometrically continuous, whereas in transformer manufacturing consecutive turns can be interleaved [33,34].

Transformer engineering considers such a problem as a capacitor network with some lumped inductors (Figure 13). In doing so, the voltage distribution along the windings can be investigated [33,34]. It immediately appears that the first coils must withstand a high electrical stress: a large voltage gradient exists between the strands which are directly connected to the power line (Figure 14). Manufacturers have taken this event into account and found that it is not as high as in the transformer case. It remains that the difference of potential is about several thousand volts, whereas in continuous operation it is about ten volts. The solution usually employed is to increase the thickness of the turn insulation.
Figure 13. The insulation wall between the copper strand and the iron core is seen as a distributed capacitance (Cg); the insulation between copper strands is also seen as a distributed capacitance (Cs); even if distributed inductors are present, they can be ignored in most cases.
Figure 14. Here, the neutral of the winding is not grounded; assuming α = 10, it immediately appears that the potential difference between consecutive turns evolves strongly near the entrance point and can induce a breakdown between the first turns.
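The behavior summarized in Figure 14 can be reproduced with a few lines. The sketch below uses the classical initial (purely capacitive) impulse distribution of an isolated-neutral winding, V(x)/V0 = cosh(α(1 − x))/cosh(α), with α commonly taken as sqrt(Cg/Cs); α = 10 follows the figure, while the turn count N is a hypothetical value chosen for illustration.

```python
import numpy as np

# Initial impulse-voltage distribution along a winding with an isolated
# (ungrounded) neutral, using the purely capacitive ladder of Figure 13:
#     V(x) / V0 = cosh(alpha * (1 - x)) / cosh(alpha)
# where x = 0 is the line end, x = 1 the neutral end.  alpha = 10 follows
# Figure 14; N is a hypothetical turn count chosen only for illustration.

alpha = 10.0
N = 50
x = np.linspace(0.0, 1.0, N + 1)                   # turn boundaries
v = np.cosh(alpha * (1.0 - x)) / np.cosh(alpha)    # per-unit voltage V/V0

turn_stress = -np.diff(v)                          # voltage across each turn
print(f"uniform share per turn: {1.0 / N:.3f} per unit")
print(f"first turn            : {turn_stress[0]:.3f} per unit "
      f"({turn_stress[0] * N:.0f} x the uniform share)")
```

With these numbers, the first turn sees roughly nine times its uniform share of the impulse, which is the gradient the thicker turn insulation has to absorb.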
However, such a voltage surge is not a regular event, and the number of voltage surges during the life of the machine is limited. For this reason, the surge test defined by the IEC standard is not a usual acceptance test. It is not recommended to use this test during upkeep periods, because it would destroy a weak insulation.

One defect in turn insulation usually produces a turn-to-turn fault: two consecutive turns are short-circuited (Figure 15). They act as a standalone one-turn coil that disturbs the magnetic flux circulation. A higher current is induced in this unwanted coil and increases the temperature of the winding. This increase shortens the lifespan of the windings. Such a defect can be detected if the line current is monitored: a small imbalance appears. It can also be discovered during regular servicing, since the resistance of the circuit containing the turn-to-turn fault shows a decrease in its value. Such a decrease is low, around one percent or less (temperature strongly affects the measurement); a rough order-of-magnitude estimate is sketched after Figure 15. This topic remains of interest to researchers, and recent publications highlight the issue [35]. In this publication, published in 2013, the authors investigate methods that are able to detect turn-to-turn faults. Each method has advantages and drawbacks, and drawbacks are not always considered as such, depending on the domain of investigation. The off-line/on-line criteria, operator skill and relevant information are examined. Unfortunately, the impact on the machine lifespan is ignored. Power station operators are aware that measuring devices have a short lifespan compared to the monitored motor. That becomes an unrealistic situation, where the measuring system is out of order before the monitored device. As a result, methods based on less skilled operators, which require high-end automated measuring devices installed within the machine, are not adapted. Methods using external measurement systems, which can be replaced by another acquisition system providing similar data, are preferred. Such a choice implies highly skilled operators.

Figure 15. In this winding having 5 turns, one turn is short-circuited (in red), providing an undesirable standalone coil.
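A rough order of magnitude of the detection problem mentioned above: one fully shorted turn removes about 1/N of the series path, so for a circuit of around a hundred turns the resistance drop is indeed "around one percent or less", and a temperature error of a couple of degrees Celsius produces a comparable shift. The turn count below is a hypothetical value chosen for illustration.

```python
# Order of magnitude of the resistance drop caused by one shorted turn,
# and the temperature error that masks it.  N_TURNS is a hypothetical
# series turn count for the phase circuit, chosen only for illustration.

ALPHA_CU = 0.00393    # temperature coefficient of copper, 1/degC (near 20 degC)
N_TURNS = 120

drop = 1.0 / N_TURNS  # one fully shorted turn removes ~1/N of the path
print(f"resistance decrease: {100.0 * drop:.2f} %")

# The same relative change is produced by a small temperature error,
# which is why the text notes that temperature strongly affects the
# measurement:
print(f"equivalent temperature error: {drop / ALPHA_CU:.1f} degC")
```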
Main Insulation in the Slot, Criteria and Issues

Such insulation must be designed to withstand the main ageing stress, which remains the thermal ageing. The mica and glass tapes are highly dependent on the toughness of the epoxy resin. The first goal of the designer is to limit the development of any void, voids being the basic roots of electrical deterioration. Such a result strongly depends on the manufacturing process (Figure 16). During the first step of the VPI process, the stator core and the windings must be dried (warmed around 50 °C and up to 110 °C; the temperature depends on the materials used). Water or moisture cannot be accepted in any part. Even if the stator must cool, a dry stator must not stay in an uncontrolled atmosphere: such a situation can lead to moisture absorption. In the second step, the stator and its windings are introduced into a vacuum chamber. The chamber is closed and an extremely low pressure is achieved (the ideal situation being a perfect vacuum). At the same time, the resin is prepared; the epoxy resin must be free of dissolved gas. The epoxy resin is introduced into the vacuum chamber and a perfect filling of the windings is expected. To achieve such a result, pressure is imposed while the stator is flooded with resin. The last step is the curing, which accelerates the polymerization. Such a curing should follow a chart showing the time sequences and temperatures. One VPI session is not enough to ensure a long lifespan for the insulation system: the surface of the insulation must be smooth to prevent corona discharges, and unwanted voids which may remain at the surface should be removed. Two or three VPI sessions can be performed to increase the lifespan. Such an increase must be understood as a reduction of deterioration sources.
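For reference, the VPI sequence just described can be restated as structured data, for example to back a process checklist. Apart from the 50-110 °C drying range quoted above, all wording is a paraphrase of the text, and the real time/temperature charts remain proprietary to each manufacturer.

```python
# The VPI sequence above, restated as structured data.  Temperatures other
# than the 50-110 degC drying range quoted in the text are deliberately
# left symbolic: real charts are proprietary to each manufacturer.

VPI_SEQUENCE = [
    ("dry",      "warm stator and windings to 50-110 degC; no residual moisture; "
                 "do not leave the dry stator in an uncontrolled atmosphere"),
    ("evacuate", "close the chamber and pull the lowest achievable pressure"),
    ("flood",    "admit degassed epoxy resin over the immersed stator"),
    ("press",    "apply pressure while flooded to drive resin into the deepest layers"),
    ("cure",     "follow the manufacturer's time/temperature chart"),
]

# Two or three passes of this sequence may be performed; the first coat
# must not be fully cured until the last pass, to avoid over-curing.
for step, (name, note) in enumerate(VPI_SEQUENCE, start=1):
    print(f"step {step}: {name} - {note}")
```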
Insulation wall ageing is also related to the voltage stresses, and a slight decrease in this parameter has a positive influence. Two methods can be employed: a looser tightening of the mica tape, which increases the insulation wall thickness, or the addition of layers to increase the insulation thickness. Both solutions have drawbacks. By adding layers, the resin may not creep within all the layers, especially into the turn-to-turn insulation, which constitutes the deepest layers. Unclamped insulation has no advantage, as the number of mica layers is unchanged and tape folding or tape slippage can occur during impregnation. In fact, the solution used remains a manufacturer's choice related to its design and manufacturing process. One important aspect when several VPI sessions are performed is that the curing process is repeated several times: the first resin coating will undergo several curing sessions, so it must remain uncured at the end of the first session to avoid over-curing.
A void-free insulation will not be easily achieved: tightening the mica layers will not allow the resin to flow, while loose layers will result in folds. One idea was suggested many years ago [36,37]; the first results were presented in 2012 [38] and reaffirmed in 2014 [39]. Since free space is mandatory between the layers to allow the resin to flow, these free spaces can initiate treeing in the insulation wall. Treeing follows the mica layer (Figure 17): it cannot puncture it and immediately bypasses it as soon as it reaches the boundary [40]. Therefore, an increase in the treeing resistance of the resin can be achieved by adding SiO2 components. By using a variety of specially processed components, i.e., nanocomposites, the thermal conductivity of the insulation wall increases (up to three times), which leads to better thermal dissipation. The Tip-Up, which is an indicator of a well-executed VPI, also decreases: its value is less than 0.3, whereas for a usual insulation wall it remains greater than 0.2 and less than 1. Voltage endurance tests show an improvement in electrical lifetime. This first step heralds a major improvement in the insulation system; the authors suggest a second step, the electrical, thermal and mechanical qualification tests of this new insulation system.
The origin of the treeing is located on both surfaces of the insulation wall (outside and inside), and any void in the insulation will accelerate the development of such a defect. Inside the wall, layers that are too tightly wound must be avoided; outside, the treeing root is located on the outermost layers of the insulation wall. Multiple VPI sequences can smooth the surface and fill any unwanted voids. Moreover, a conductive layer is added to suppress corona discharges in the slot. In all cases, the ionization of the surrounding air produces ozone; this gaseous material, even in low quantities, is highly reactive and reacts with the epoxy resin by breaking the cross-linked network.
Main Insulation in the End Winding Area, Criteria and Issues

Machines with high mechanical power usually use form-wound coils. This implies a specific situation outside the stator core, in particular in the end-winding areas. The strands are bent to fit geometrical requirements: the parts of the winding in the slot must follow a straight line and must not undergo any mechanical stress due to an incorrect curvature. Hence, applying an insulation tape is not easy. The geometrical shape of the end winding is complicated (Figure 18), and the application of an insulation tape is not as regular as shown in Figure 17. A handmade application is often required and degrades the quality of the insulation wall. Even if the manufacturing price is cheaper, the automated process cannot provide a similar quality of application for these areas. The end windings are parts where many operations are already performed by hand: the soldering of the connection rings (one ring per phase), the soldering of the connections between consecutive coils and the soldering of the power cables. Applying insulating tape by hand is therefore more realistic and allows a monitored increase in the thickness of the insulation wall. It does not imply an increase in the insulation strength: folds during the mica-tape application cannot be avoided and the clamping of the tape on the coil is not regular. Thereafter, it is usual to find two or more additional layers in the end windings. Even if this winding part is not in the slot, the surface of the insulation wall has to be as smooth as possible to avoid corona discharges and ozone generation. This effect is increased in the end-winding area, since the two main elements concerned are adjacent coils (Figure 19). Such a defect can be monitored by observing light emission during a high-potential test. This degradation scheme is often cited in articles concerning the diagnosis of machine outages; it is a seriously destructive phenomenon and has to be taken into account. Patents on its mitigation can be found: for example, US 2007/0170804 A1 suggests the use of an additional mica-tape armor to mitigate this destructive effect [41].

Manufacturers have spent many decades finding a solution to mitigate the voltage stresses associated with the interruption of the conductive armor tape outside the slot. Historically, the first solution consisted of using a coating containing silicon carbide particles. In doing so, the electrical resistance of the outer layer is lower than that of an insulating layer and evolves with the electric field intensity; the electrical potential can be distributed more evenly and the magnitude of the electric field is mitigated everywhere. The use of such a solution, namely stress-grading, has a dramatic impact on the lifespan: during the first years of operation the stress-grading layer behaves as expected, but the varnish-based coating breaks up step by step and, after a decade, the layer is partially cracked. Such a defect is now solved by using stress-grading tapes. Nevertheless, stress-grading remains of interest to researchers, as its ageing is not well handled. Supply voltages increase steadily and manufacturers look for more suitable materials. Nowadays, electrical machines may be driven by variable-speed drives, and the shape of the applied voltage induces high levels of harmonics that age the stress-grading tape. Other types of material are being investigated; the traditional SiC layer, which is a resistive material, will probably be replaced by capacitive layers.
The stress-grading layer impacts the acceptance tests, especially when the criteria are based on the linear behavior of the insulating wall. Manufacturers must integrate this issue and perform tests that take into account the non-linear effect of the stress-grading. This can be done for the half coils used in hydro-generators or large alternators: these coils are assembled on site and can be tested separately during the acceptance tests. The effect of the stress-grading is easily mitigated by diverting the leakage current associated with this layer: guard electrodes divert the leakage current so that the insulation-wall measurement is not polluted (Figure 20). Guard electrodes are not an answer for all cases, however; they cannot be used on machines using form-wound coils.
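The role of the guard electrode can be pictured with a simple two-path model of the measurement: the current drawn from the source splits into a volumetric component through the insulation wall and a surface/stress-grading leakage component. Without a guard, both flow through the meter and pull the apparent insulation resistance down; with a guard, the leakage path is returned around the meter. The sketch below uses hypothetical resistance values and only illustrates this reasoning; it is not a measurement procedure from the article.

```python
# Two-path sketch of an insulation-resistance measurement (hypothetical values).
V = 5_000.0          # test voltage in volts
R_wall = 20e9        # true insulation-wall resistance (ohms)
R_leak = 2e9         # surface / stress-grading leakage path (ohms)

# Without a guard electrode, both currents flow through the meter:
R_apparent = V / (V / R_wall + V / R_leak)   # i.e. the parallel combination
# With a guard electrode, the leakage current bypasses the meter:
R_guarded = R_wall

print(f"apparent: {R_apparent / 1e9:.2f} GOhm, guarded: {R_guarded / 1e9:.2f} GOhm")
```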
Nonlinear behaviors also influence other acceptance tests. The Tip-Up test may be affected by the introduction of stress-grading tapes. In this test, measurements are made at two different voltages: in some proprietary acceptance criteria, the first measurement is done at 0.25 Vn and the second at Vn (line-to-ground). For these two voltages, the power losses of the insulation system are measured. When the voltage is low, the stress-grading layers carry no leakage current; by increasing the voltage to Vn, stress-grading leakage currents appear and may produce a higher level of power losses. Newly manufactured machines that use epoxy-mica insulation and SiC stress-grading often have a Tip-Up greater than 0.6 and less than 1 for the whole winding (the value is lower for a standalone coil). Therefore, customers should not specify a lower value for the Tip-Up in their acceptance criteria. Fortunately, the standard describing the measurement process also describes a favorable situation: only one coil is powered, which means the voltage between the powered coil and all the other coils is limited to the phase-to-neutral voltage. In operation, the voltage between two different coils can reach the phase-to-phase voltage.
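The Tip-Up figure quoted above can be written out explicitly. It is commonly evaluated as the difference between the dielectric-loss measure (typically the dissipation factor tan δ) obtained at rated voltage and the one obtained at the reduced voltage, 0.25 Vn in the proprietary criterion cited; here it is assumed that tan δ is expressed in percent, which is consistent with the 0.2–1 range quoted in the text. The numerical inputs below are hypothetical.

```python
# Minimal Tip-Up sketch (hypothetical tan(delta) values expressed in percent).
def tip_up(tan_delta_low_pct: float, tan_delta_rated_pct: float) -> float:
    """Tip-Up = tan(delta) at rated voltage minus tan(delta) at the reduced voltage."""
    return tan_delta_rated_pct - tan_delta_low_pct

# e.g. 1.1 % measured at 0.25*Vn and 1.8 % measured at Vn:
value = tip_up(1.1, 1.8)
print(value)            # 0.7
print(value <= 0.6)     # False: such a winding would fail a 0.6 acceptance limit
```

This is why the text warns customers against specifying a limit below the 0.6–1 range that healthy SiC-graded windings routinely exhibit.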
A long lifespan involves regular stress-grading improvements. Stress-grading varnish is no longer used in high-power motors; tapes are preferred. Several technologies exist for stress-grading tapes: the resistive stress-grading (RSG) technology and the capacitive stress-grading (CSG) technology. CSG is not suitable for electrical machines. For RSG, two types of material can be used: silicon carbide (SiC) or zinc-oxide varistor (ZnO) material [42,43]. SiC tapes are used in electrical machines, whereas ZnO is used in insulators [44]. Stress-grading tapes have characteristics that evolve with the frequency and the strength of the electric field; their resistivity can be expressed using Equation (5) [45], where E is the electric field strength, f the frequency, and ρ and ρ0 the material resistivities. The length and thickness of the stress-grading can be chosen with numerical simulations; a balance must be struck between the level of electrical losses, the highest values of the electric field and the leakage currents [46]. Modeling using discrete elements has advantages: the elementary parts of the stress-grading layer may each have monitored properties. Unfortunately, the geometry of the stress-grading, and in particular its thickness, leads to difficulties with the finite-element method [47,48]. Practically, designers want to know the highest value of the electric field, the location of the thermal hot spot and the expected leakage current; any excess of these values will have a negative impact on ageing. 3D simulations that include all coils are not realistic, as the solving times would be too long, and some authors suggest simplifications [47]. Thereafter, 2D simulations can provide enough information to study the effect of the stress-grading layer. Figures 21 and 22 show two situations that will be used as working cases. In the first case (Figures 21a and 22a), there is no stress-grading and the 2D axisymmetric simulation indicates a high electric field at the end of the conductive armor tape. In the second case (Figures 21b and 22b), a conductive layer with constant conductivity is added and the electric field hot spot disappears. Nowadays, most simulations only take one strand into account [49]. Such a result has already been introduced in Chapter 2. The mitigation of the electric field must also be achieved along the tangential axis, since corona discharges along such a surface can generate ozone. In Figure 23, two simulations are performed: one with a stress-grading layer of constant resistivity and one without stress-grading. Knowing that corona discharges can appear if the tangential electric field exceeds 400 V/mm, it appears that stress-grading mitigates the tangential field to a lower value: with stress-grading it stays below 3 × 10⁵ V/m (300 V/mm), whereas without stress-grading it can reach 4 × 10⁶ V/m (4000 V/mm). Researchers have also studied the thermal effect on SiC stress-grading and found a surprisingly large increase in leakage current; a factor of 12 was measured [50]. Therefore, even if stress-grading has a positive impact on the local mitigation of the electric field, its own ageing must be monitored: the destruction of a stress-grading layer can lead to a failure within a short time. Unfortunately, such a problem must be coupled with the thermal problem, and multiphysics simulations are expected [51]. Manufacturers include the stress-grading tape in their experimental studies [52].
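Coming back to the field-dependent resistivity used in these simulations: Equation (5) itself is not reproduced above, but stress-grading resistivities are commonly modelled as decreasing roughly exponentially with the local field strength, which is enough to illustrate why the hot spot at the armor-tape edge collapses: where E rises, ρ drops and the potential spreads out along the surface. The sketch below uses such a generic law with hypothetical coefficients; it is not the specific expression of Ref. [45], and the frequency dependence mentioned in the text is omitted.

```python
import math

# Generic, illustrative field-dependent resistivity of a SiC stress-grading layer.
# rho_0 and k are hypothetical constants, not the coefficients of Equation (5).
def rho(E_v_per_mm: float, rho_0: float = 1e10, k: float = 2e-3) -> float:
    """Resistivity (ohm*m) decreasing exponentially with field strength (V/mm)."""
    return rho_0 * math.exp(-k * E_v_per_mm)

for E in (100.0, 500.0, 2000.0):
    print(f"E = {E:6.0f} V/mm -> rho = {rho(E):.3e} ohm*m")
```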
As a conclusion to this chapter, the authors go deeper into the current standards and highlight the fact that such standards are not able to close the debate about the acceptance tests for qualifying machines with a long lifespan. Nevertheless, standards help designers and customers by providing methods and trends in measurements. Step by step, the authors have taken each element constituting the insulation wall and have investigated the issues and the possible answers. It appears that the number of issues is increasing and that the methods to solve them are more and more complex. Finite-element simulations take more and more time and are still not able to take all the parameters into account. The fundamental issues do not change: excessive electric fields, abnormal temperatures and unusual mechanical vibrations remain topics for researchers. Thereafter, designers have the difficult task of gathering all the data and deciding which road map should be followed for an enhancement of the insulation system. Designers are not alone: manufacturers who provide long-lifespan machines have background knowledge and a long experimental history. Even if this cannot be examined in depth for each manufacturer, as they want to protect their know-how, open studies bring enough data to understand the main issues: the mitigation of thermal, electrical and mechanical stresses.

Knowledge Base and its Influence on Insulation Process

The evolutions of the insulation wall are regularly presented by researchers. The articles cited show that innovations leading to a genuinely new insulation system take a long time to be integrated. In 1959, Fuji-Electric thought that the use of mica could be avoided; the following decades changed this idea. In 2002, nanotechnology with its SiO2 particles suggested an improvement in performance. Nowadays, this improvement is not regularly used and has not led to a new insulation system. Insulation system evolutions are also related to the evolution of the manufacturing process, and patents can be found describing insulation systems that have never been developed. Although calculations and simulations can provide important information on the degradation factors acting on the insulation system, its behavior during operation remains the first evaluation criterion. EPRI provides several studies with exhaustive analysis [53,54]. It is certain that lifespan is a multifactor degradation. In 1978, one article began with these words: "Insulation Aging: a Historical and Critical Review" [55]. In 2014, such multifactor ageing was still being studied [56]. Combining ageing factors remains a vast domain, as the combining method has a great influence on the lifespan assessment [55]. Nevertheless, even if standards exist, they do not cover this scope: for example, IEC 62101 combines the electrical and thermal stresses but is focused on supply voltages below 1000 V.
Partial Discharges Mechanism

A bubble-free insulation is an ideal insulation system that will never exist. Bubbles are dangerous: they initiate degradation of the epoxy resin [57]. Bubbles in insulating materials can be created during the curing phase, and it is impossible to remove them completely from polymeric materials. The bubbles may have a diameter of a few microns; such small bubbles are found, for example, in the cross-linked polyethylene of cable insulation. With VPI, the remaining bubbles are assumed to be empty, and such voids have a high breakdown strength (10 to 1000 kV/mm). Unfortunately, bubbles are not true voids: they may contain gases, which have a lower permittivity and a lower breakdown strength. The smallest bubbles are the first to generate partial discharges (Figure 24).

Partial discharges are phenomena that are initiated during the last step of the manufacturing process. Small voids can be mitigated with an appropriate process, and doing the VPI with care prevents the entrance of polluted gas. The phenomenon can be described using Figure 25. A sinusoidal voltage is applied to the two boundaries. Without a void, the insulation is a capacitor (C). When voids are present, three capacitors appear, the most important being the capacitor linked to the void (Cv). In this area, the electric field can exceed the breakdown strength. A leakage current may appear and can disappear after a decrease in the applied voltage (red curve) or be maintained during the positive part of the applied voltage (green curve). The electrical discharge is detected outside the insulation wall by an apparatus and, after calibration of the signal, the measurement of the energy expended during these phenomena provides information on the quality of the insulation wall. For more than 50 years, companies have performed partial discharge testing on electrical apparatus as part of predictive maintenance programs; skilled users can interpret the signals and point out which part of the insulation is subject to a defect [52,58].
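The capacitor picture of Figure 25 can be made quantitative with the usual series-divider approximation for a thin, gas-filled void: the void and the healthy insulation above and below it behave as capacitors in series, so the low-permittivity gas takes a disproportionate share of the applied voltage and sees a field roughly εr times the average wall field. The sketch below evaluates this divider for hypothetical dimensions and permittivity; all numbers are illustrative, and the quoted breakdown strength of air is only an order-of-magnitude reference (very small gaps deviate from it).

```python
# Series-capacitor sketch of the voltage across a thin gas-filled void (per unit area).
EPS0 = 8.854e-12   # vacuum permittivity, F/m

def void_voltage(v_applied: float, wall_mm: float, void_mm: float, eps_r: float = 4.0) -> float:
    """Voltage across the void, treating void and remaining wall as series capacitors."""
    c_void = EPS0 / (void_mm * 1e-3)                      # gas-filled gap
    c_wall = EPS0 * eps_r / ((wall_mm - void_mm) * 1e-3)  # remaining epoxy-mica wall
    return v_applied * c_wall / (c_wall + c_void)

# Hypothetical case: 3 mm wall, 0.05 mm void, 6.6 kV supply (phase-to-ground voltage).
v = void_voltage(v_applied=6600 / 3**0.5, wall_mm=3.0, void_mm=0.05)
print(f"~{v / 0.05:.0f} V/mm in the void (air breaks down around 3000 V/mm)")
```

Even this crude estimate puts the void field above the breakdown strength of air, which is why gas-filled bubbles ignite partial discharges long before the bulk insulation is at risk.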
Even if the measurement of partial discharges concerns both the number of discharges and the associated energy, the parameter used in acceptance tests is only the total energy associated with these discharges. At low voltage, only small bubbles are subject to partial discharges. At rated voltage, large bubbles light up and produce a high level of partial discharges.
Large bubbles are undesirable, as they will produce an unacceptable level of partial discharges within a few years and will lead to a breakdown. However, the effects of large bubbles may be incorporated into an acceptance criterion. Customers can address this issue by using the Tip-Up criterion. The Tip-Up focuses on the equivalent circuit, R + C, of the insulation wall (Figure 26). The resistance accounts for the losses, which are essentially resistive. Partial discharges may increase the resistive losses through their leakage currents. Since large voids will not be activated at 0.2 Un, their effect will pollute the measurement performed at Un. Therefore, an insulation wall without large voids has a low Tip-Up, and an increase in Tip-Up may be induced by large voids (Figure 26) [59,60].
Partial discharges in bubbles are a mechanism that slowly degrades the epoxy resin; a low level of partial discharge is therefore a good indicator of a healthy insulation wall. One important phenomenon is the effect of the gas in the voids. For a perfect insulation system, there would be no gas in the voids. Such a requirement is not straightforward, especially in areas close to the surface, where any crack may break the sealing. Therefore, the phenomenon that must be considered is the behavior of a small bubble containing air. This question has initiated many publications, some of which provide enough ideas to initiate a discussion. The partial discharge mechanism has fortunately been studied with experimental validations. In 1993, a published thesis provided an explanation of the evolution of voids under electrical stress [61]. The author investigated two cases, in both of which the bubbles are filled with gas. Even if the studied bubbles are larger than the bubbles present in a real insulation wall, voids with a height of about 30 µm are still representative. In one case, the void was filled with nitrogen only; in the second case, with nitrogen and a small amount of oxygen. In both cases, the voltage stress was above the inception voltage, and the first type of discharge observed was the streamer type. A streamer discharge is characterized by a peak of energy with a short duration, about one nanosecond, and the area affected by the discharge is limited to a small part of the bubble. It is during this step that an important evolution appears: an acid, mostly oxalic acid, is produced and spreads over the surface of the bubble. This product is important, as it initiates the second step. After a short ageing time, less than 2 h, the author identified a new behavior: the discharge process becomes slower (a Townsend-type discharge process). Even if this evolution is characterized by a lower peak of energy, its duration is longer and the phenomenon can affect the entire surface of the bubble. The acid created in the previous step lies on the surface of the bubble; it may be observed as a powder or whitish gel and has a conductive nature. The last step is the pitting stage, during which a higher rate of discharges is observed, as well as a localized attack around the crystallized acids. All this leads to the formation of pits, which will later induce electrical treeing [61]. The transition from the first step to the second step may be interrupted: the absence of O2, CO, CO2 and H2O should inhibit any oxidation reaction. One solution to avoid such intrusion into the insulation wall is to perform one or two additional VPI sessions; in doing so, the voids close to the surface are perfectly sealed.
Mechanical Vibrations and Glass Transition Temperature

An alternating current in a wire produces an alternating magnetic field. When two wires are close to each other, Laplace forces appear between them at twice the frequency of the alternating current. Hence, as electrical machines use alternating currents at 50 Hz (60 Hz), every wire undergoes an alternating force at 100 Hz (120 Hz). Even if such an alternating force is low in amplitude, it exists during the whole lifetime and has to be taken into account in the design (Figure 27). It immediately appears that this mechanical stress acts on the insulation system, and the material that undergoes it is the epoxy resin. The most important parameter of the epoxy resin is the glass transition temperature (Tg). When the temperature of the epoxy resin is lower than Tg, the epoxy resin is considered a hard material. If the temperature is greater than Tg, the epoxy resin acts as a soft material, and more precisely as a soft material that can absorb vibration. Even if the epoxy resin becomes soft, it keeps its shape; by increasing the temperature further, the material can reach a flabby behavior, which is not desired. Hence, it is not easy to choose the state the epoxy resin should have at rated conditions: should it be hard, soft or in an intermediate state? Fortunately, one experiment gives the answer [62]. A test bench used generator bars submitted to low vibrations at 100 Hz while being powered with a high-voltage supply (32 kV). It appeared that the epoxy resin in its hard state did not behave well: electrical breakdowns occur earlier when the temperature of the insulation system is lower than Tg.
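The doubling of the frequency follows directly from the quadratic dependence of the force between two parallel conductors on the current. For two adjacent wires carrying the same sinusoidal current i(t) = I cos(ωt), a standard trigonometric identity gives

$$F(t) \propto i(t)^2 = I^2\cos^2(\omega t) = \frac{I^2}{2}\left(1 + \cos 2\omega t\right),$$

so a 50 Hz (60 Hz) supply current produces a steady attraction plus a ripple at 100 Hz (120 Hz); the ripple is the vibration that the epoxy resin has to absorb.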
Nanocomposites, An Answer to the Lifespan Extension?
In the previous chapter, many ideas have been introduced to increase the lifespan of the insulation system. The main improvement must concern the epoxy resin, which remains the weakest part of the insulation wall. The authors of [36] introduced an application of nanocomposites in order to decrease the treeing phenomena between layers in the insulation system. As mentioned, this innovation has not yet led to a usable insulation system and is still under evaluation. The introduction of nanocomposites into insulation systems will have an impact on the evolution of acceptance criteria. This conclusion is supported by a recent review [63], whose authors thoroughly examined polymer properties and their evolution when nanocomposites are included; in particular, the behavior of filled epoxy resin has been investigated. First, the electrical breakdown voltage may increase by using suitable nanocomposites instead of microcomposites. The compounds are also more resistant to partial discharges and reduce the possibility of treeing. However, the dielectric strength of the epoxy resin may be deteriorated by including fillers with too high a permittivity [64]. Unfortunately, the water absorption is not reduced: compounds filled with nanocomposites may have an increased moisture uptake compared with the base polymer, which leads to a decrease in resistivity and an alteration of the mechanical properties. The main issue is the low viscosity that must be maintained to achieve the filling of the insulation wall during VPI; this issue has not been studied intensively. Even if nanocomposites are of interest for polymer chemistry, their introduction into insulation systems has not been fully realized, and a long period of observation will be required before they can be used in machines with a long lifespan.

Industrial Examples

The first commercial nuclear power plant in the United States was the Shippingport reactor, the first full-scale PWR nuclear power plant in the United States (2 December 1957). Westinghouse Electric Corporation was the first contractor. This power plant remained in operation until October 1982; as it was a prototype, no one knew how long it would last. Now the situation is different: more than 400 nuclear power stations are in operating condition in the world. The first fully operational nuclear power plant initiated a large market, and large countries want a local manufacturer. The USA mainly has General Electric and Westinghouse as industrial companies, France began with Framatome (now AREVA), Russia has Rosatom, Canada has Atomic Energy of Canada Limited, etc.
Westinghouse has built nuclear power stations, and some of them have reached forty years of operation with the same motors for the reactor coolant pump (RCP). These motors use the proprietary "Thermalastic"™ insulation system. This invention was patented and several foreign manufacturers have purchased a license; that is why the first nuclear power stations in France use this insulation system. Manufacturers that obtained a license also wanted to acquire such know-how and initiated studies on the improvement of the insulation system; as it was patented, they had to make changes and also provide significant enhancements. In 1954, "Thermalastic"™ was introduced in French factories as the insulation system for motors of nuclear power stations. In 1972, Jeumont-Schneider finished the development of its own insulation system, named "Jislastic"™. In 1982, "Jislastic"™ was qualified for operation in a nuclear environment. Qualification was performed on coils and on one of the most critical nuclear motors (the motor of the RHR, reactor Residual Heat Removal, pump) insulated with the "Jislastic"™ system. Once again, the time needed to develop a new specific insulation is very long. In 1987, "Jislastic"™ was used in RCP motor stators in operation in French power plants. Windings insulated with the "Jislastic"™ system successfully withstood the critical electrical and mechanical stresses due to 1500 DOL (direct-on-line) full-load stops and starts performed on an RCP motor manufactured with this insulation system (the tested RCP motor parameters are: nominal power > 8500 kW, synchronous speed = 1500 rpm, voltage = 6600 V, frequency = 50 Hz, with a very large shaft inertia that includes a flywheel). This RCP motor is still in operation to date. The "Jislastic"™ system is used for the manufacturing of AREVA RCP motor stators. Reviews of machines in operation and applied improvements have made it possible to increase the insulation design lifetime from 40 years to 60 years (RCP motors of the new generation provided by AREVA are designed for 60 years of operation).
Conclusions

Customers want machines with high-end characteristics when the outage costs are dramatically high. An extended lifespan is not a benefit for every application; it is related to the severity of the application and the outage costs. Lifespan is thus coupled with operating conditions. Users should also take into account the fact that a long-lifespan motor needs maintenance periods; the best example is the nuclear power plant. Industrial applications usually do not need such machines, but the evolution of manufacturing processes, such as their automation, will put this statement in doubt: any downtime in the future large manufacturing plants will have a cost that will not be negligible. The trend is already visible, because industrial customers want machines capable of operating under unusual conditions. In fact, unplanned maintenance can seriously affect site operation, with considerable financial and environmental impact. Hence, the market for industrial machines will evolve and will need the knowledge of manufacturers who provide specific machines; their know-how will be able to solve the issues raised by customers. Insulation system failure is the main reason for outages. The design of an insulation system is not the result of a calculation: as has been underlined, the insulation system is a compound and its lifespan is defined by its weakest part. Raw materials such as mica and glass are inert and will still be present in insulation in the next century. Nevertheless, the improvement of an insulation system is long-term work in which experiments and monitoring of machines in operation are required. As mentioned, standards are a help when measurements are made: using rigorous methods, the results are not in doubt and an increase in quality can be demonstrated. But the standards do not provide answers to all the questions. The combination of ageing parameters is not within the scope of standards, even if many of them recognize that ageing depends on several parameters. Operating conditions are sometimes too specific to be incorporated into standards. The effects of mechanical stresses on the insulation wall are still a field of research, even though some revisions have integrated limited mechanical stresses. Without a standard, this becomes a task that binds the customer and the manufacturer: they must demonstrate with limited tests that the item will reach the lifespan under the specified conditions. An increase in price is often unavoidable; even if the customer still wants to lower costs, the selling price has to be looked at from a global view including the outage costs. In this publication, the authors want to inform users that research on improvements is ongoing and that it takes time to validate the real benefit of an insulation evolution. Designers who are responsible for insulation systems know that this part is a critical element. By explaining, step by step, the basic functions associated with each element of the insulation system, the authors hope to have demystified the understanding of each component and, in doing so, to help with diagnosis. By recalling the ageing causes of insulation systems known to date, the authors wish to contribute to their resolution through the application, during design and manufacturing, of the presented technological solutions. The application of the recommendations joined to this article is one of the ways for manufacturers of conventional electrical machines with a limited market to extend their machine range.
By improving the motors' insulation, they improve their competitiveness by supplying machines with high-quality winding insulation.

Figure 1. One strand or multi-strand conductor contains turn insulation. Its aim is to insulate the strand(s) and also to sustain the voltage surges provided by lightning or induced by non-sinusoidal inverters. When the cross-section is too large, the skin effect can appear and reduces the effective cross-section. The strand is therefore subdivided to mitigate this effect and each wire is insulated (strand insulation).

Figure 2. The main insulation (in green in the figure) has to withstand the supply voltage. Its thickness depends on the voltage level; this is a designer choice linked to the strength of the electric field in the main insulation. In case of a high-voltage supply, a conductive layer, namely the conductive armor tape (in black in the figure), is applied in order to eliminate the surface partial discharges, which have a great influence on the ageing of the insulation system.

Figure 3. The conductive layer is grounded (ground potential); the difference of potential between the conductive layer and the main insulation depends on the supply voltage. It creates, at the end of the conductive layer, an area with excessive electrical stress. Such electrical stresses can be mitigated by the use of a semi-conductive layer.

Figure 4. Mica in its natural state. It looks like a piece of stone. It can be divided into flakes and can provide thin sheets.
Figure 5.The thermal endurance graph is the basic graph that is able to provide an assessment of the insulation lifespan.The graph is linked to Class F insulation.By limiting the operating temperature, i.e., 120 °C, lifespan reaches 200,000 h. Figure 5 . Figure 5.The thermal endurance graph is the basic graph that is able to provide an assessment of the insulation lifespan.The graph is linked to Class F insulation.By limiting the operating temperature, i.e., 120 • C, lifespan reaches 200,000 h. Figure 6 . Figure 6.At the end of the iron sheet stack, the thermal stresses encountered in certain designs is high enough to speed up the aging of the insulation.This affects the area which is in yellow in (a).As this area will also undergo electrical field stresses, the conductive layer is elongated by a short length after the end of the stack, (b), doing so, the thermally affected area is now outside the electrically stressed area which is red coloured (c).In addition, semi-conducting layer will be added. Figure 6 . Figure 6.At the end of the iron sheet stack, the thermal stresses encountered in certain designs is high enough to speed up the aging of the insulation.This affects the area which is in yellow in (a).As this area will also undergo electrical field stresses, the conductive layer is elongated by a short length after the end of the stack, (b), doing so, the thermally affected area is now outside the electrically stressed area which is red coloured (c).In addition, semi-conducting layer will be added. Figure 7 . Figure 7. D.G.E.B.A. has two epoxy groups which can react with hardener and can initiate molecular chains."n" is the repeat units and has an average value of n = 0.1 when there is no advancement in the polymerization.The aromatic group provides temperature improvement.Insulation needs such improvement and without advancement, a warm resin is liquid and can be processed for vacuum pressure impregnation (VPI). Figure 7 . Figure 7. D.G.E.B.A. has two epoxy groups which can react with hardener and can initiate molecular chains."n" is the repeat units and has an average value of n = 0.1 when there is no advancement in the polymerization.The aromatic group provides temperature improvement.Insulation needs such improvement and without advancement, a warm resin is liquid and can be processed for vacuum pressure impregnation (VPI). Figure 8 . Figure8.A hardener having only two active sites is not able to generate a solid material.A rigid material is linked to hardener having a number of active groups greater than two.Insulation wall must be as hard as possible. Figure 9 . Figure 9.The diamine can initiate four bonds; it can build a polymer network in a short time and can provide a highly crosslinked network. Figure 8 . Figure8.A hardener having only two active sites is not able to generate a solid material.A rigid material is linked to hardener having a number of active groups greater than two.Insulation wall must be as hard as possible. Figure 8 . Figure8.A hardener having only two active sites is not able to generate a solid material.A rigid material is linked to hardener having a number of active groups greater than two.Insulation wall must be as hard as possible. Figure 9 . Figure 9.The diamine can initiate four bonds; it can build a polymer network in a short time and can provide a highly crosslinked network. Figure 9 . Figure 9.The diamine can initiate four bonds; it can build a polymer network in a short time and can provide a highly crosslinked network. Figure 10 . 
Figure 10.Three situations can be found about the strand insulation.From the left to the right, no insulation, an enamel coating with or without an additional tape (fiberglass without mica). Figure 11 . Figure 11.In figure (a), the defect is related to the strand insulation but is localized in front of the main insulation wall.The defect can be a void or a local de-lamination.It acts as a resistance in parallel with a capacitor (b) and can ignite partial discharges. Figure 12 . Figure 12.The defect is between two strands.It can be a void or a local rip in the layer.It can act as a short circuit between strands. Figure 10 . Figure 10.Three situations can be found about the strand insulation.From the left to the right, no insulation, an enamel coating with or without an additional tape (fiberglass without mica). Figure 10 . Figure 10.Three situations can be found about the strand insulation.From the left to the right, no insulation, an enamel coating with or without an additional tape (fiberglass without mica). Figure 11 . Figure 11.In figure (a), the defect is related to the strand insulation but is localized in front of the main insulation wall.The defect can be a void or a local de-lamination.It acts as a resistance in parallel with a capacitor (b) and can ignite partial discharges. Figure 12 . Figure 12.The defect is between two strands.It can be a void or a local rip in the layer.It can act as a short circuit between strands. Figure 11 . Figure 11.In figure (a), the defect is related to the strand insulation but is localized in front of the main insulation wall.The defect can be a void or a local de-lamination.It acts as a resistance in parallel with a capacitor (b) and can ignite partial discharges. Figure 10 . Figure 10.Three situations can be found about the strand insulation.From the left to the right, no insulation, an enamel coating with or without an additional tape (fiberglass without mica). Figure 11 . Figure 11.In figure (a), the defect is related to the strand insulation but is localized in front of the main insulation wall.The defect can be a void or a local de-lamination.It acts as a resistance in parallel with a capacitor (b) and can ignite partial discharges. Figure 12 . Figure 12.The defect is between two strands.It can be a void or a local rip in the layer.It can act as a short circuit between strands. Figure 12 . Figure 12.The defect is between two strands.It can be a void or a local rip in the layer.It can act as a short circuit between strands. Figure 13 . Figure 13.During short transients, such as a voltage surges, the windings (a) are modelled as distributed elements: capacitors, inductors and resistors, (b).Insulation wall between the copper strand and the iron core is seen as distributed capacity (Cg), insulation between copper strands is also seen as distributed capacity (Cs) and even if distributed inductors are presented, they can be ignored in most cases. Figure 14 . Figure 14.The turns can experience high voltages in the areas which are near the power line.Here, the neutral of the windings is not grounded and in assuming  = 10, it immediately appears that the difference of potential between consecutive turns strongly evolves near the entrance point and can induce a breakdown between the first turns. Figure 13 . 
Figure 13.During short transients, such as a voltage surges, the windings (a) are modelled as distributed elements: capacitors, inductors and resistors, (b).Insulation wall between the copper strand and the iron core is seen as distributed capacity (Cg), insulation between copper strands is also seen as distributed capacity (Cs) and even if distributed inductors are presented, they can be ignored in most cases. Figure 13 . Figure 13.During short transients, such as a voltage surges, the windings (a) are modelled as distributed elements: capacitors, inductors and resistors, (b).Insulation wall between the copper strand and the iron core is seen as distributed capacity (Cg), insulation between copper strands is also seen as distributed capacity (Cs) and even if distributed inductors are presented, they can be ignored in most cases. Figure 14 . Figure14.The turns can experience high voltages in the areas which are near the power line.Here, the neutral of the windings is not grounded and in assuming  = 10, it immediately appears that the difference of potential between consecutive turns strongly evolves near the entrance point and can induce a breakdown between the first turns. Figure 14 . Figure14.The turns can experience high voltages in the areas which are near the power line.Here, the neutral of the windings is not grounded and in assuming α = 10, it immediately appears that the difference of potential between consecutive turns strongly evolves near the entrance point and can induce a breakdown between the first turns. Figure 15 . Figure 15.In this windings having 5 turns, one turn is short-circuited (In red) and it provided an undesirable standalone coil. Figure 16 . Figure 16.The manufacturing process can bring three types of defect: two at the borders of the main insulation and one localized in the main insulation. Figure 17 . Figure 17.When the mica-layers are wrapped around the coil, it remains a possible path for a treeing.Even if the path is elongated by the half-overlapped mica layers, this will lead to a breakdown of the insulation wall. Figure 16 . Figure 16.The manufacturing process can bring three types of defect: two at the borders of the main insulation and one localized in the main insulation. Figure 16 . Figure 16.The manufacturing process can bring three types of defect: two at the borders of the main insulation and one localized in the main insulation. Figure 17 . Figure 17.When the mica-layers are wrapped around the coil, it remains a possible path for a treeing.Even if the path is elongated by the half-overlapped mica layers, this will lead to a breakdown of the insulation wall. Figure 17 . Figure 17.When the mica-layers are wrapped around the coil, it remains a possible path for a treeing.Even if the path is elongated by the half-overlapped mica layers, this will lead to a breakdown of the insulation wall. Figure 18 . Figure 18.In the end-windings, the coil is bent to fit to the geometrical constraints due to the straight parts. Figure 18 . Figure 18.In the end-windings, the coil is bent to fit to the geometrical constraints due to the straight parts. Machines 2017, 5 , 7 23 of 34 Figure 19 . 
Figure 19.Corona discharges start from sharp edge.Even if the distance from surfaces is large enough, the electrical field magnitude increases locally (a).Outside of the slot, the coils are bent and the handmade process can initiate sharp edges.It is not unusual to discover insulation wall affected by corona discharges in the first section of the outside area (b).This effect appears essentially for coils which are connected to different phases. Figure 19 . Figure 19.Corona discharges start from sharp edge.Even if the distance from surfaces is large enough, the electrical field magnitude increases locally (a).Outside of the slot, the coils are bent and the handmade process can initiate sharp edges.It is not unusual to discover insulation wall affected by corona discharges in the first section of the outside area (b).This effect appears essentially for coils which are connected to different phases. Figure 19 . Figure 19.Corona discharges start from sharp edge.Even if the distance from surfaces is large enough, the electrical field magnitude increases locally (a).Outside of the slot, the coils are bent and the handmade process can initiate sharp edges.It is not unusual to discover insulation wall affected by corona discharges in the first section of the outside area (b).This effect appears essentially for coils which are connected to different phases. Machines 2017, 5 , 7 24 of 34 Figure 20 . Figure 20.One half of a coil (a) containing all the elements.In (b), the strands are powered by a high voltage supply and all the leakage currents can be measured.In particular, A1 and A3 are leakage currents induced by the stress-grading.There are extracted by the guard electrodes and do not disturbed the measurement A2 which is the leakage current of the main insulation wall.Without the guard electrodes, the current measured by A2, will include A1 and A3. Figure 20 . Figure 20.One half of a coil (a) containing all the elements.In (b), the strands are powered by a high voltage supply and all the leakage currents can be measured.In particular, A 1 and A 3 are leakage currents induced by the stress-grading.There are extracted by the guard electrodes and do not disturbed the measurement A 2 which is the leakage current of the main insulation wall.Without the guard electrodes, the current measured by A 2 , will include A 1 and A 3 . Figure 21 . Figure 21.In both cases: (a) and (b); one strand surrounded by a main insulation wall exits from a stator core.A stress-grading layer in added in (b). Figure 22 . Figure 22.Simulations associated to case (a) and (b) show that electrical field at the end of the Conductive Armor Tape can be mitigated.Its value can be similar to the value encountered in the insulating layer which is in the slot (1.0 × 10 6 V/m = 1 kV/mm).Therefore, the electrical ageing in this sensible area is similar to the electrical ageing in the slot (case (b)). Figure 21 . Figure 21.In both cases: (a) and (b); one strand surrounded by a main insulation wall exits from a stator core.A stress-grading layer in added in (b). Figure 21 . Figure 21.In both cases: (a) and (b); one strand surrounded by a main insulation wall exits from a stator core.A stress-grading layer in added in (b). Figure 22 . 
Figure 22.Simulations associated to case (a) and (b) show that electrical field at the end of the Conductive Armor Tape can be mitigated.Its value can be similar to the value encountered in the insulating layer which is in the slot (1.0 × 10 6 V/m = 1 kV/mm).Therefore, the electrical ageing in this sensible area is similar to the electrical ageing in the slot (case (b)). Figure 22 . Figure 22.Simulations associated to case (a) and (b) show that electrical field at the end of the Conductive Armor Tape can be mitigated.Its value can be similar to the value encountered in the insulating layer which is in the slot (1.0 × 10 6 V/m = 1 kV/mm).Therefore, the electrical ageing in this sensible area is similar to the electrical ageing in the slot (case (b)).Machines 2017, 5, 7 26 of 34 Figure 23 . Figure 23.Two cases, one with stress-grading and one without stress-grading are simulated.The tangential electrical field (Et) along the main wall insulation is plot.With stress-grading, the tangential electrical field is under 3.10 5 V/m (300 V/mm).Without stress-grading the tangential electrical field can reach 4.10 6 V/m (4000 V/mm). Figure 23 . Figure 23.Two cases, one with stress-grading and one without stress-grading are simulated.The tangential electrical field (E t ) along the main wall insulation is plot.With stress-grading, the tangential electrical field is under 3.10 5 V/m (300 V/mm).Without stress-grading the tangential electrical field can reach 4.10 6 V/m (4000 V/mm). Figure 24 . Figure24.A model of an insulation layer containing three voids.Even if the electrical field in the insulation material is under 2 kV/mm, the electrical field in one void is over 3.0 kV/mm and electrical discharge may appear in such a void if it is fill with air (breakdown voltage of the air is 3.0 kV/mm). Figure 25 . Figure 25.One case of partial discharge.One bubble is present in the insulation wall and modifies its behavior.The new capacitor (Cv) has a lower breakdown voltage.When applied voltage induces a too high electrical field in the bubble, discharges occur and avalanche current increase the insulation leakage current. Figure 24 . Figure24.A model of an insulation layer containing three voids.Even if the electrical field in the insulation material is under 2 kV/mm, the electrical field in one void is over 3.0 kV/mm and electrical discharge may appear in such a void if it is fill with air (breakdown voltage of the air is 3.0 kV/mm). Figure 24 . Figure24.A model of an insulation layer containing three voids.Even if the electrical field in the insulation material is under 2 kV/mm, the electrical field in one void is over 3.0 kV/mm and electrical discharge may appear in such a void if it is fill with air (breakdown voltage of the air is 3.0 kV/mm). Figure 25 . Figure 25.One case of partial discharge.One bubble is present in the insulation wall and modifies its behavior.The new capacitor (Cv) has a lower breakdown voltage.When applied voltage induces a too high electrical field in the bubble, discharges occur and avalanche current increase the insulation leakage current. Figure 25 . Figure 25.One case of partial discharge.One bubble is present in the insulation wall and modifies its behavior.The new capacitor (Cv) has a lower breakdown voltage.When applied voltage induces a too high electrical field in the bubble, discharges occur and avalanche current increase the insulation leakage current. Figure 26 . 
Figure 26.Measured tan() versus voltages shows in red a trend of curve out of the IEC 60034-27-3 acceptance criterion and in green, the trend of curve in accordance with the IEC 60034-27-3 acceptance criterion.In individual coil, according to IEC 60034-27-3, tan delta tip-up that is the difference between two predefined voltage steps of 0,6 Un and 0,2 Un (tan()0,6 -tan()0,2) shouldn't exceed 0.005.In complete motor phase, there is not recommendation given by IEC standard, according to IEEE 286, usually Tip-Up is the difference between two predefined voltage steps of 0,25 Vn and Vn, with Vn the phase to ground voltage.When an excess of partial discharges is encountered at the highest voltage, the equivalent current Ir increases and tan() also increases.Such an issue is detected by the Tip-Up criterion. Figure 26 . Figure 26.Measured tan(δ) versus voltages shows in red a trend of curve out of the IEC 60034-27-3 acceptance criterion and in green, the trend of curve in accordance with the IEC 60034-27-3 acceptance criterion.In individual coil, according to IEC 60034-27-3, tan delta tip-up that is the difference between two predefined voltage steps of 0.6 Un and 0.2 Un (tan(δ) 0.6 -tan(δ) 0.2 ) shouldn't exceed 0.005.In complete motor phase, there is not recommendation given by IEC standard, according to IEEE 286, usually Tip-Up is the difference between two predefined voltage steps of 0.25 Vn and Vn, with Vn the phase to ground voltage.When an excess of partial discharges is encountered at the highest voltage, the equivalent current Ir increases and tan(δ) also increases.Such an issue is detected by the Tip-Up criterion. Figure 27 . Figure 27.In this figure, a slot contains eight strands.They are divided into two groups.The four upper strands carry a current oriented +, the four other strands carry a current oriented in the opposite (-).In each strand, two types of forces exist.The first one, in green, is the force induced by the strands of the same group (It gathers the strands).In blue, the force induced by the strands of one group to the strands of the other group (The groups repulse themselves). Figure 27 . Figure 27.In this figure, a slot contains eight strands.They are divided into two groups.The four upper strands carry a current oriented +, the four other strands carry a current oriented in the opposite (-).In each strand, two types of forces exist.The first one, in green, is the force induced by the strands of the same group (It gathers the strands).In blue, the force induced by the strands of one group to the strands of the other group (The groups repulse themselves). Table 2 . Insulation systems are divided into thermal classes having a maximum operating temperature (IEC-60085). Table 3 . The test coils are separated into groups which will be submitted to several aging temperatures for a specified time (IEC-60216). Table 3 . The test coils are separated into groups which will be submitted to several aging temperatures for a specified time (IEC-60216). Table 4 . List of tests or measurements that can be done in acceptance process of an electrical machine concerning the insulation system. Table 5 . List of standards that can be used for the qualification of an insulation system.
The Chemistry Development Kit (CDK): An Open-Source Java Library for Chemo- and Bioinformatics The Chemistry Development Kit (CDK) is a freely available open-source Java library for structural chemo- and bioinformatics. Its architecture and capabilities, as well as its development as an open-source project by a team of international collaborators from academic and industrial institutions, are described. The CDK provides methods for many common tasks in molecular informatics, including 2D and 3D rendering of chemical structures, I/O routines, SMILES parsing and generation, ring searches, isomorphism checking, structure diagram generation, etc. Application scenarios as well as access information for interested users and potential contributors are given.

INTRODUCTION

Whoever pursues the endeavor of creating a larger software package in chemoinformatics or computational chemistry from scratch will soon be confronted with the Sisyphean task of implementing the standard repertoire of chemoinformatics algorithms and components invented during the last 20 or 30 years. The obvious workaround for this problem is the commercially available chemoinformatics libraries that have been developed by companies such as MDL Information Systems, Inc., Daylight Chemical Information Systems, Inc., Advanced Chemistry Development, and certainly many others. A scientist in an academic environment, however, often feels obliged to openly share his results with the scientific community. Using proprietary components for software development makes it impossible to do so. Generally, scientific software is too often closed source, leaving the user with a black box performing magical operations. Perceived as being counterproductive for the overall scientific progress, this trend fortunately seems to be changing. Sharing of ideas and results within communities is probably the most central paradigm in science. By publishing his results a scientist allows his colleagues to verify and build upon his results, thereby advancing the particular field as a whole [If I have seen further it is by standing on the shoulders of giants. - Isaac Newton]. One of the motivations for such contributions, besides pure scientific curiosity, is, of course, the gain of social recognition and reputation among one's peers. In recent years the ideas sketched above have been part of the open-source revolution that took place in the world of software development, most widely recognized through the great success of the free Unix-like operating system GNU/Linux, a collaborative work of many individuals and organizations, including the Free Software Foundation led by Richard Stallman and the Finnish computer science student Linus Torvalds, who started the project. According to several essays on this subject, open-source software, for which, by definition, the source code is always freely available to the public, 1 has a number of intriguing benefits. Most importantly, if the community of users is large enough and everyone can look at the sources and change them, it should not take too long until a particular software error is found and fixed. "Given enough eyeballs, all bugs are shallow", as Eric Raymond put it in his widely recognized essay "The Cathedral and the Bazaar", 2 in which he analyses the mechanisms and principles of the open-source movement. Further, other scientists can easily build on existing results. Credit can still be given in the appropriate form, because open-source software is by no means freeware or in the public domain.
Quite the contrary, the package as a whole as well as each piece of source code is labeled with a clear copyright notice, stating the name of the copyright holder and the nature of the license. This copyright notice must not be removed. Additional comments, however, regarding the changes and improvements made by others can, of course, be added. Substantial improvements to an existing piece of code by someone other than the copyright holder will usually lead to something like team formation, including appropriate copyright changes. This is especially important for academic scientists, who need to be able to point out their contributions to a particular field. Considering the virtues of open-source software on the one hand and the scientific tradition on the other hand, we started the CDK project under the terms of a liberal open-source license. 3 We use SourceForge, 4 a Web-based open-source development platform, for coordinating the contributions from about 10 developers from about five different countries. A greater number of people have subscribed to the developers mailing list and either listen silently or contribute by making feature requests or critical comments. SourceForge provides all the tools which are generally considered indispensable for coordinating the contributions from developers and users in larger software projects, such as web space, mailing lists, bug trackers, software versioning systems, release managers, etc. This article is intended not only to describe the CDK project in scientific and software-technological terms but also to promote the underlying development model. The authors think that these principles form a paradigm for scientific software development where scientists can truly exploit the benefits of the Internet for a distributed collaboration that would not have been possible in pre-Internet times. We are explicitly not claiming to give a general overview of chemical open-source software; this will form an article of its own. However, we will give a synopsis of open-source Java software in the following section instead. The interested reader is cordially invited to visit the CDK project pages at http://cdk.sourceforge.net, get in touch with the developers, make use of the CDK package, and ultimately to extend its functionality.

OPEN SOURCE JAVA SOFTWARE IN CHEMISTRY

A number of libraries written in Java are freely available in binary form, but they do not include access to use and extend the source code. [5][6][7] Libraries for other computer languages have been described in the literature but are, to our knowledge, not available to the public. 8 To give an overview of the open-source activities in chemistry, we analyzed the open-source projects registered at SourceForge. 4 This Website has about 40 projects registered in the field of molecular chemistry, as found with a search on keywords such as molecule, molecular, chemistry, and chemical. Many projects are inactive: some are only registered but show no activity at all, and some showed activity in the past but never released software in binary form or source code. The number of active projects is about 25-30. Of these projects, 14 were found that use the Java programming language. Three of these have been inactive for a long period and do not provide downloads. Two are succeeded by this project, 9,10 and four are based on CDK.
[11][12][13][14][15] Four projects are interesting to note: MolMaster, having a BSD license 16 and including visualization of isosurfaces; jVisualizer, having the GPL license 17 for analyzing NMR couplings; CML, having an Artistic License 18 with tools around the Chemical Markup Language; 19 and JOELib, having the GPL license 20 with an extensive file I/O library based on OpenBabel 21 and a library for molecular descriptors. Note that the first two are not really libraries but applications instead. CMLDOM and JOELib, however, are libraries with similar functionality for storing chemical content in memory.

THE ORIGIN OF THE CDK

The CDK originated as a support project for a couple of different chemoinformatics software packages, namely a structure editor, 11 a Web database for organic compounds and their NMR chemical shifts, 14 a program for computer-assisted structure elucidation, 22 and a 3D structure viewer and analyzer, 13 which is still being ported to the CDK. The authors of these programs generally agree on the benefits of the programming language Java, which are as follows: a clear object-oriented design, platform independence, and the fact that it has become an important standard for client- and server-side applications on the Web. Since most of the scientifically interesting applications in chemistry have a computationally demanding kernel, they benefit from a client/server architecture, because the server part can then be run on a powerful machine, while a user-friendly (Web) interface can be used on whatever client machine the user chooses. These demands can be met much more easily if one can resort to a single programming language for the implementation, and so we consider Java to be the programming language of choice not only for chemoinformatics and computational chemistry but also for scientific applications in general. Concerns are frequently raised with respect to the performance of Java. However, the language structure itself, compared for example with C++, provides no good reason for Java having a generally lower performance than other languages more frequently used in high-performance computing. Indeed, great efforts have been made to increase Java runtime performance and so, today, given a proper implementation and using the right runtime environment, server-side Java code does not need to be slower than C++ code with the same scope. We would like to draw the reader's attention to a whole issue of the IBM Systems Journal dedicated to the subject of high-performance computing in Java. 23

DEVELOPMENT MODEL

To participate in CDK development, the interested individual needs to register with SourceForge (SF) to receive a free SF account and subscribe to the developers mailing list. He or she then contacts one of the project administrators, who then adds the new member to the project's developers list. Besides good Java programming skills, a working knowledge of the Concurrent Versions System (CVS) is needed. CVS is the most widely used system for version management in the open-source community, and it greatly facilitates the coordination of multiple developers working on the same source tree. It is quite common in computer science to write a requirements specification before coding is started. Such a specification describes the intended behavior of the software (classes in this case) and can be used by developers to check the implementation and by users to see how those classes can be used.
When the CDK was designed, such a specification was only partly made, using Unified Modeling Language (UML) diagrams. 24 Currently we use Requests For Comments (RFC) documents for proposing a new specification to which the CDK library must conform. These RFCs, a long-time Internet standard for decision making, are discussed on the developers mailing list, after which they are marked as final by majority voting.

PROJECT CONVENTIONS

In Java, source code is organized in so-called packages, which often (but not necessarily) follow a naming scheme of something like an inverted Internet address. Putting a class such as Atom into a uniquely named package prevents class name collisions in cases where another library, used together with the CDK, also contains an Atom class with a different function. Since the CDK is part of the OpenScience project, 25 the CDK source tree is organized in packages under the org.openscience.cdk root package. Frequently, a new developer is interested in adding a particular functionality to the CDK, for example the capability for isomorphism and automorphism checking. He discusses the implications of his endeavor with the other CDK developers on the mailing list. Taking into account the suggestions, caveats, etc. of his codevelopers, he would then create a new subpackage org.openscience.cdk.isomorphism and add his contribution under this part of the source tree. An important part of the CDK development effort is unit testing, which is based on the idea of writing easily repeatable tests for the smallest units of the software package in question. Whenever a programmer adds a new module with new functionality to the CDK source tree, he is expected to add a test to the org.openscience.cdk.tests package, adhering to a particular naming convention. The unit testing itself is based on the JUnit package, 26 which makes it easy to run a fully unattended test of the whole CDK package. This has proven to be of great value for a distributed programming effort like the CDK. Especially if a developer changes something within the CDK core classes, a full JUnit run of the CDK tests will show him within a few seconds whether his changes broke something or not. Further, each of these little test snippets is an instructive example of how to use a particular CDK module. Indispensable for a library is documentation. The CDK is documented using the JavaDoc system, an integral part of the Java programming language. Using special tags, the code is documented directly in the source code, from which documentation can be produced automatically in various formats, most importantly as Web pages. We are using source code metrics to constantly measure the fraction of documented source code statements, and we try to keep this percentage as high as possible. In addition to the JavaDoc API documentation, the user is guided by a few introductory manuals. It should also be mentioned that the CDK's software architecture has been independently chosen as the subject of an M.Sc. thesis at the Technion (Israel Institute of Technology), 27 focusing on automated methods for code inspection and review. This is a common industrial process by which source code is usually read manually to find errors, potential improvements, dependencies, etc. The thesis focuses on automating formal concept analysis using concept lattices 28 for the review of individual Java classes. Concept analysis is a mathematical classification technique which is used for different problems in software research.
This methodology is applied in three stages: (1) understanding the public interface of the class for use as a black box, (2) trying to reason about the design and possible errors in the class based on its lattice, and (3) inspecting the actual source code. The first two stages are done without even having the source code: the methods and fields are determined by reverse engineering of the compiled class files. We have already received valuable input from this related project, which will help us to resolve design flaws in our library.

DESCRIPTION OF THE LIBRARY'S FUNCTIONALITY

6.1. The Core Classes. The classes contained in the root section of the CDK's package hierarchy are all formalized representations of basic chemical concepts such as atoms, bonds, molecules, etc. Figure 1 shows a UML diagram explaining the inheritance hierarchy and the dependencies between the fundamental classes of the CDK. The UML diagrams shown in this article depict the relationships of only the core classes. They are thus edited and show only a subset of the true interclass relationships. They show the central role of the ChemObject class, which is the superclass of all other classes and provides methods for storing even complex properties for any derived CDK object. The first and probably most obvious inheritance chain to be mentioned in the core classes is that of Atom extending AtomType extending Isotope extending Element. This is not only logical from a chemical point of view but also provides the basis for a simple mechanism for the creation of Atoms, AtomTypes, Isotopes, and Elements based on subclasses of a single IsotopeFactory tool class, which will be discussed below. Placing the Atom in a long chain of inheritance provides central access points to the different levels of information. While the Element, for example, provides access to the symbol or the atomic number, an AtomType can further distinguish the state of hybridization of an Atom or some other distinction a force field might need. A further level of abstraction is incorporated by the AtomContainer and the ElectronContainer. The ElectronContainer forms the base for constructs such as Bonds and Orbitals, whereas the AtomContainer is the envisioned storage for Atoms together with their Bonds and is the superclass for Rings, Molecules, and Substructures. To support higher-level concepts such as molecular ensembles or reactions, the CDK core is complemented by classes which group molecules into higher-order constructs, like SetOfMolecules, ChemSequence, ChemModel, and ChemFile. For clarity, the relationship of ChemObject and the AtomContainer has been moved to an additional UML diagram shown in Figure 2. It shows how Molecules are contained in a SetOfMolecules, which is part of a ChemModel. ChemModels are meant to store the molecular information of the state of a chemical system at a given point in time. To allow for the modeling of changes in time, we introduced the possibility of arranging various ChemModels into a ChemSequence. The ChemFile class is designed as the top-level container, which can contain all the concepts stored in a chemical document, among which are one or more ChemSequences. The Polymer class extends Molecule and provides convenient access to the Monomers it consists of. The Monomer itself is implemented as an AtomContainer. A subclass of Polymer is the BioPolymer used for representing protein and DNA molecules. The Polymer design allows BioPolymers to treat each amino acid as an AtomContainer.
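As a minimal sketch of how the core classes described above fit together, the following fragment builds a small molecule by hand. The class names (Atom, Bond, Molecule) follow the text, but the exact constructors and method signatures are assumptions and differ between CDK releases, so this should be read as illustrative rather than as the canonical API of the release discussed here.

// Hedged sketch: assembling ethanol from the core classes described above.
// Constructors and bond-order representation are assumed; early CDK versions
// expressed the bond order as a double, later versions use an enum.
import org.openscience.cdk.Atom;
import org.openscience.cdk.Bond;
import org.openscience.cdk.Molecule;

public class CoreClassesSketch {
    public static void main(String[] args) {
        Molecule ethanol = new Molecule();   // Molecule is an AtomContainer

        Atom c1 = new Atom("C");             // Atom extends AtomType, Isotope, Element
        Atom c2 = new Atom("C");
        Atom o  = new Atom("O");

        ethanol.addAtom(c1);
        ethanol.addAtom(c2);
        ethanol.addAtom(o);

        // Bonds are ElectronContainers stored in the same AtomContainer.
        ethanol.addBond(new Bond(c1, c2, 1.0));
        ethanol.addBond(new Bond(c2, o, 1.0));

        System.out.println("Atoms: " + ethanol.getAtomCount());
    }
}

Such a hand-built AtomContainer could then be wrapped into a SetOfMolecules and a ChemModel if the higher-order containers mentioned above are needed.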
6.2. 2D Structure Graphical Handling. The ability to display and manipulate 2D drawings of chemical structures is one of the most important features of any chemoinformatics-related program. This includes the capability of generating coordinates for those chemical structures which have, for example, been produced by a structure generator as coordinateless chemical graphs. The details of this latter step are discussed in Section 6.4. The Model-View-Controller paradigm (see for example ref 29) is used in the CDK library design wherever applicable. The classes for 2D structure graphical handling, for example, work on top of a ChemModel whose content they display and manipulate. A Renderer2D class produces a 2D drawing comparable to those produced by the major commercially available products. This view can be customized by altering the standard settings of a Renderer2DModel object. If the pure display is to be complemented by an option to manipulate the drawing, a Controler2D can be added to the setup. Its settings, again, are determined by a Controler2DModel and can be altered, for example, by using setDrawNumbers(true) in order to display atom numbers annotated to the structure. The Controler2D is an adapter to the available input devices, typically mouse and keyboard, and translates input into changes to the underlying models, which again are reflected by changes in the view produced by the Renderer2D. A simple resulting application is shown in Figure 3.

6.3. 3D Structure Handling. To provide high-performance 3D graphics, the Java3D API is used within the CDK. This, however, makes CDK-based 3D applications no longer platform independent. This dependency originates from the Java3D API relying on OpenGL or DirectX for the sake of performance. With regard to losing platform independence, the CDK also contains classes for 3D rendering which are not based upon the Java3D API. Together with the separation of the rendering classes, due to the Model-View-Controller paradigm, this leads to the following four fundamental classes for 3D rendering: Renderer3D, Renderer3DModel, AcceleratedRenderer3D, and AcceleratedRenderer3DModel, the latter two based upon Java3D.

6.4. Structure Diagram Layout. Key fields of chemoinformatics, like virtual combinatorial chemistry, virtual screening, or computer-assisted structure elucidation, frequently handle chemical structures as one-dimensional graphs. These graphs are, for example, products of structure generators which use graph-theoretical techniques to exhaustively and irredundantly generate all constitutional isomers that are in agreement with a given molecular formula. In any of these programs, however, comes the point where, after a selection during a virtual screening, for example, the successful candidate structure(s) need(s) to be presented to a chemist. At this point, a tool is needed that generates 2D or 3D coordinates to produce the kind of depiction a chemist is used to. This process has been termed Structure Diagram Generation. 31 While 3D model builders such as CORINA 32 are on our wishlist for the future and have not yet been implemented, the CDK features a 2D structure diagram generator, which has been written from scratch and which can easily be seen as one of the finest and most useful parts of the CDK, since most of its applications require structure diagram generation at several stages.

6.5. Graph Invariants.
This package contains a few classes for the computation of graph invariants such as Wiener indices, 33 Morgan's extended connectivity (EC) indices, 34 and others. 35 Morgan's EC indices are, for example, used for the canonical labeling of compounds. This package is likely to be one of the hot spots for future developments, since many chemoinformatics applications, like (quantitative) structure-activity relationship ((Q)SAR) computations, often rely on calculating various combinations of graph invariants of different types.

6.6. Structure Generators. This package holds some simple structure generators which are used by the SENECA system for computer-assisted structure elucidation. 22 The class SingleRandomStructureGenerator can be used to generate a totally random structure from the constitutional space given by a certain molecular formula. Based on this randomly generated structure one can then use RandomGenerator to make small, random moves in constitution space, based on an algorithm suggested by Faulon. 36 If such a generator is combined with a target function and a simulated annealing protocol, one can effectively search constitution space for structures with certain desired properties, provided that these properties can be reliably back-calculated from a given constitutional formula. To be able to build a structure generator for chemical graphs based on evolutionary algorithms (like the well-known genetic algorithm), we also included a CrossOverMachine, which accepts two chemical graphs in the form of AtomContainers and produces two offspring. Genetic algorithms are population-based methods which produce new offspring for the next generation by a carefully chosen combination of mutation and crossover procedures applied to the current population. The CrossOverMachine thus complements the mutation operation used in the RandomGenerator class.

6.7. Ring Searches. John Figueras' fast algorithm for finding the Smallest Set of Smallest Rings (SSSR) has been implemented and is used, for example, by the structure diagram generation package. 37 Especially large condensed ring systems, for which the process of coordinate generation could take up to a minute due to a slow depth-first ring perception algorithm in older systems, 38 can now be laid out within fractions of a second, as shown in Figure 4. Further, this package contains a class for partitioning a given ring system into AtomContainers, one for each ring. In other applications, like aromaticity detection, for example, it is essential to compute the Set of All Rings (SAR). While procedures have been published to produce the SAR from an SSSR, it is computationally more efficient to use specialized algorithms for this purpose. The CDK contains an implementation of a fast and efficient algorithm given by Hanser et al. 39

6.8. Aromaticity Detection. There are various definitions of aromaticity and at least as many ways of detecting aromaticity according to these definitions. This package is the intended container for all of them and currently holds an implementation of a HueckelAromaticityDetector class. Based on the SAR detection algorithm by Hanser et al. (see Section 6.7), this class starts with the largest detected ring, counts the number of alternating double or triple bond electrons, and also takes into account free electron pairs of heteroatoms. It then checks whether the ring contains 4n + 2 π-electrons, according to the well-known Hückel rule.
The ring, all its atoms, and bonds are marked as aromatic, and the search continues with the remaining rings of equal or smaller size, leaving out those rings that are completely part of an already detected larger aromatic system.

6.9. Isomorphism. Being able to determine whether two chemical structures are identical or whether one structure is a subgraph of another structure is one of the most important capabilities of a chemoinformatics library. The Isomorphism subpackage contains a versatile module for Maximum Common Substructure (MCSS) searches. Since MCSS determination is the most general case of graph matching, it can be used to determine structure identity and to do subgraph matching and maximum common substructure searches.

6.10. File Input/Output. File input and output is generalized in the CDK. All file I/O classes implement either ChemObjectReader or ChemObjectWriter. Each file format is represented by two separate classes implementing one of these interfaces. The CDK currently supports I/O classes for XYZ, MDL molfile, 40 PDB, 41 and CML. 42 The latter format was developed by Murray-Rust and Rzepa as the first XML-based file format for chemical content. The CDK contains both an input and an output class for this format. The CML input reader uses an alternative to Murray-Rust's DOM approach and is based on SAX. 43

6.11. Interaction with Other Java Libraries. Besides file I/O, the CDK supports a second method to exchange data with other programs and libraries. The interface to other libraries makes it possible to combine methods from both libraries, giving access to a larger set of functionality. The CDK provides direct conversion of CDK classes to JOELib 20 classes. Support for CMLDOM 19 is planned.

6.12. SMILES. The Simplified Molecular Input Line Entry Specification (SMILES) provides string representations of molecular constitutions. 44 Due to their compactness and relative simplicity, SMILES strings are now widely used as an interchange format for coordinateless molecular structures. Based on a specification for unique (canonical) SMILES, 45 it is also possible to perform graph isomorphism checks. The CDK features a generator for canonical SMILES, written to comply with the rules published by the Daylight Inc. founders. While the SMILES generator implements all of the published SMILES standard including chirality, the SMILES parser in the CDK package only complies with the (slightly extended) Super Simplified SMILES specification, 46 which is sufficient to code most organic structures.

6.13. Fingerprints. Fingerprinting is nowadays an indispensable tool for judging molecular similarity and as a prefilter for isomorphism checking and thus for structure searching in databases. Here, as in the case of SMILES, a dedicated subpackage for this class of algorithms is justified because there are various ways of computing fingerprints. By allowing the addition of different fingerprinters instead of just having one monolithic org.openscience.cdk.tools.Fingerprinter, we give the user the freedom to choose whatever method yields the best performance for his case. The Fingerprinter class in the CDK produces Daylight-type fingerprints. 47 It works by running a breadth-first search, starting at each atom in the molecule, thereby producing string representations of paths up to a length of six atoms. For each of these SMILES-like strings, hash codes are computed using the standard string hashing algorithm provided by the Java language.
With these hash codes, a pseudorandom number generator with a default working range of [0-1023] is seeded and the first random number is retrieved. This number indicates a position in a fingerprint bitstring of length 1024, which is then set to "1". Based on the entirety of all computed paths from the molecule, a molecular fingerprint is obtained in the form of this bitstring.

6.14. Tools. The tools package contains utility classes for all those cases that did not justify the creation of a dedicated package. The IsotopeFactory, for example, can return preconfigured instances of Elements and Isotopes for a given element symbol or a given atomic mass. The ConnectivityChecker class tests whether a given chemical graph is connected, i.e., whether there is a bond path between every possible pair of atoms in the graph and, in the case of a non-connected graph, it can return a Vector with the disjoint pieces of the graph, stored in AtomContainer objects. Related to ConnectivityChecker is the PathTools class which, for example, provides methods for finding the shortest path between two given atoms in a molecule. The MFAnalyser class has methods for returning the molecular formula of a given Molecule object and for creating an unbonded AtomContainer object from a given molecular formula string. The HOSECodeGenerator produces HOSE codes 48 for each atom in a given AtomContainer. By feeding these HOSE codes into the BremserOneSphereHOSECodePredictor class, one can predict expectation ranges for carbon-13 NMR chemical shifts. 49

RESULTS

The CDK is now the basis for a number of software projects. The chemical editor JChemPaint, 11 which takes advantage of the CDK and for which the CDK's Model-View-Controller mechanisms have been implemented, is in turn just a support tool for higher-level applications such as the Web database NMRShiftDB for organic compounds and their NMR chemical shifts, or SENECA, a program for computer-assisted structure elucidation. 22 While allowing the fast assembly of large monolithic applications such as SENECA or NMRShiftDB, the true strength of the CDK lies in its ability to serve as a chemoinformatician's workbench. By just writing a few lines of code, one can quickly test new ideas or modify existing CDK-based applications to make them suit other needs. A code snippet illustrating how one can quickly parse a list of SMILES strings into AtomContainers, produce 2D coordinates, and display the results in a MoleculeListViewer is sketched at the end of this article.

CONCLUSION

We have presented details of a new open-source Java library facilitating the implementation of software packages in chemoinformatics. The CDK is freely available 50 under the terms of the GNU Lesser General Public License (LGPL). 3 The source code may thus be downloaded and improved or adapted for specific needs. In contrast to the famous GNU General Public License (GPL), 51 the LGPL allows for the use of the CDK in proprietary software packages. While any use of the CDK for proprietary and closed-source projects is thus welcome, we also highly appreciate feedback and any potential backflow. Companies are using the CDK for commercial projects, such as SafeBase, a theragenomics knowledge management system on adverse drug reactions. 52 At the IBM Germany Development Lab in Böblingen, an Extreme Blue internship project group has been started to write a CDK-based open-source 2D/3D editor for chemical structures.
The company IXELIS, situated in Strasbourg, France, is working on a global semantic information system applied to scientific knowledge and has contributed the MCSS code, which came into existence during their work with the CDK. Further, our chemoinformatics software kit is the basis for other open-source projects, like the SENECA system for computer-assisted structure elucidation 22 and NMRShiftDB, 14 a free database of organic chemicals and their NMR data. Besides its proven usability in research and production-quality scientific software, the CDK has also become a valuable tool for teaching chemoinformatics. At least one of the authors (C.S.) is using the software package in lectures to demonstrate many standard chemoinformatics algorithms on the functionality level as well as on the source code level. Due to the inherent modularization of the object-oriented language Java, most of the classes and methods are concise and easy to understand. It should be mentioned that we have experienced, albeit on a smaller scale than the large open-source projects, the benefits and the fascination of the principles mentioned in the Introduction. Based on this experience, this article is also meant to promote these ideas and to attract further contributors to our project. The inspiring experience is that, as soon as a certain amount of material has accumulated and a certain amount of publicity has been gained, an open-source project becomes something of a self-runner: contributors start adding their own subprojects, and new ideas are integrated which would probably never have been borne in mind if the CDK had been created by a single organization or even a single individual. Of course, such a development model also has disadvantages. It is probably much more difficult to adhere to certain quality standards, to respond to deadlines (but, on the other hand, there rarely are any in such small projects), and to do strategic planning. It has been shown, however, that these problems can be overcome.
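The code listing referred to in the Results section is not reproduced in this extraction. The following is a hedged reconstruction of such a snippet: the class names SmilesParser, StructureDiagramGenerator, MoleculeListViewer, and MoleculeViewer2D, their package locations, and the method signatures used below are assumptions based on the textual description above and on later CDK releases, not a verbatim copy of the original listing.

// Hedged reconstruction: parse SMILES strings, generate 2D coordinates,
// and display the structures. Viewer-related calls are assumed signatures.
import org.openscience.cdk.Molecule;
import org.openscience.cdk.layout.StructureDiagramGenerator;
import org.openscience.cdk.smiles.SmilesParser;
import org.openscience.cdk.applications.swing.MoleculeListViewer;
import org.openscience.cdk.applications.swing.MoleculeViewer2D;

public class SmilesToDiagramSketch {
    public static void main(String[] args) throws Exception {
        String[] smiles = { "CCO", "c1ccccc1", "CC(=O)Oc1ccccc1C(=O)O" };

        SmilesParser parser = new SmilesParser();
        StructureDiagramGenerator sdg = new StructureDiagramGenerator();
        MoleculeListViewer viewer = new MoleculeListViewer();

        for (int i = 0; i < smiles.length; i++) {
            Molecule mol = parser.parseSmiles(smiles[i]); // SMILES -> AtomContainer
            sdg.setMolecule(mol);
            sdg.generateCoordinates();                    // assign 2D coordinates
            MoleculeViewer2D view = new MoleculeViewer2D(sdg.getMolecule()); // assumed constructor
            viewer.addStructure(view, smiles[i]);         // assumed signature
        }
    }
}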
CONSIDERING THE CURRENT CHALLENGES AND RISKS IN THE SUSTAINABLE LAND USE FOR MINING TERRITORIES The relevance of this work stems from the growing challenges and risks arising in mining areas and the need to counteract them. The purpose of the work is to develop a methodology of sustainable land use under the conditions of modern changes in the environment under the influence of anthropogenic stress. The authors propose to interpret the concept of "sustainable land use" as a long-term, multi-purpose and cost-effective relationship between society and land resources. Results. The issues of the methodology of sustainable land use in industrial regions are considered. The levels of sustainable land use management are substantiated within the framework of the concept of biotic regulation of the environment. The features of management at each of these levels are revealed, and the scientific and technical principles of sustainable land use are formulated. The strategic priorities and indicators of sustainable land use are defined. Methodological approaches to the ecological and economic assessment of land resources are formulated, both by components and as an integrated assessment. The widespread, long-term changes of land resources and the transformation of ecosystems are taken into account. The parameters according to which the "corridors" of acceptable land use are determined are identified, including environmental parameters: the level of conservation of natural ecosystems, the balance of natural and anthropogenic energy flows, and the degree of extraction of natural resources, as well as social parameters. A procedure is proposed for coordinating individual interests and social preferences on the basis of a search for optimal, effective options of sustainable land use. It is recommended to perform multi-criteria optimization of sustainable land use by means of the lexicographic method in relatively simple situations; in more complex cases this can be attained by the method of successive concessions. Options for the discount rate and the discount factor, depending on the length of the discount period (according to the model of complex processes), are proposed. Application of the results. The implementation of the developed methodological provisions makes it possible to provide conditions for sustainable land use, counteracting the risks associated with environmental challenges arising in mining areas.
Introduction

The world community has come to understand (Rio 92; Johannesburg, 2002; Rio+20) the importance of correcting the development of society in its relations with the natural environment. The need to develop principles of economic activity that take into account the emerging challenges and risks has also been recognized [1][2][3]. Nowadays, the most obvious challenges and risks are environmental threats (without diminishing the importance of social ones) [4]. They are realized in the form of various negative consequences: first for the natural environment, and then for various sectors of the economy. Such sectors include land use, subsoil use, and forest management. Environmental threats are negative consequences caused by natural factors, determined mainly by the characteristics of global climate change, and by anthropogenic (including technogenic) factors. They manifest themselves in the form of pollution of environmental components (air, vegetation, soil, water), in the form of accumulated industrial waste, and in the form of destruction of natural ecosystems. The main environmental risks in land use are the increasing frequency and intensity of extreme weather and climate events. Further development of land use should be carried out in accordance with the concept of sustainable development of territories, the principles of environmental safety of society [5] and the green economy [6,7]. This development should be based on methodologies that consider land resources as the basis of biological life [8]. Land use should also rest on consistent principles that take into account the long-term and multiple values of land resources. Sustainable land use is a long-term (maintaining biotic regulation of the environment), multi-purpose (meeting the diverse needs of people) and cost-effective (optimal according to relevant indicators and criteria) relationship between society and land resources. The purpose of the study is to develop a methodology of sustainable land use under the conditions of modern environmental changes under the influence of anthropogenic stress.

Results

Methodological provisions of sustainable land use in industrial regions, in the authors' opinion, include the following steps:
- maintaining the necessary level of biotic regulation of the environment;
- a hierarchy of sustainable land use management levels: conceptual, ideological, political and economic;
- substantiation of the scientific and technical principles of sustainable land use in industrial areas [9].

Biotic regulation of the environment in mining areas, under the conditions of modern challenges and risks expressed in the emergence of environmental threats, reflects the transformation of biological energy and biomass. This happens in natural and anthropogenic channels and reflects changes in the cycle of biogenic elements (C, O, H, K, etc.). In natural ecosystems, before the start of large-scale industrial production, people consumed 1-2% of the biological energy from the environment [8], and no changes were observed. In the early twentieth century, anthropogenic influence led to the removal of up to 5% of biological energy from nature. As a result, there were significant negative changes in the environment. Nowadays, the increasing anthropogenic impact has formed a set of environmental threats, which are caused by the removal of more than 10% of bioenergy from nature.
In some industrial regions (for example, Ekaterinburg or Nizhny Tagil) this figure has risen to 30 %. Visually, this is reflected in the growth of the area of disturbed lands (built-up, contaminated) and in the growing proportion of semi-destroyed territories (agricultural land, derived forests). Environmental conditions deteriorate as a result. Modern challenges and risks determine the corresponding features of the levels of sustainable land use management: conceptual, ideological, political, and economic.

The conceptual level of management defines the main targets for a long period of land use. From the ecosystem standpoint, the concept of sustainable land use supposes managing land owners within the limits of permissible change in the biotic regulation of the environment, that is, in the transformation of bioenergy in natural and anthropogenic channels and in the turnover of biogenic elements. Reasonable satisfaction of the needs of society in the results of land use is also supposed. This refers to all types of land use: as a means of production (agricultural land and forest land), as a spatial basis (lands of settlements, industry and transport), and as a storeroom of minerals (subsoil use areas).

The ideological level of sustainable land use management determines the main direction and ways of implementing the conceptual guidelines. The greening of public consciousness and of the land use economy is expressed in deeper processing of grown and extracted natural resources; the conscious formation and regulation of consumer demand for the resulting products is also very important.

The political level of sustainable land use management determines the formation of a legal framework appropriate to the ideological level. Its essence is to improve legal documents, including the differentiation of the concepts of "land" and "soil". Land is a broader concept than soil; it is a socio-economic phenomenon, whereas soil is a basic component of the natural environment. In the legislation of the Russian Federation there is no distinction between these concepts. Some countries (USA, China, Germany, France, Canada) have already concluded that soil protection can be ensured only by enshrining the legal term "soil" in state-level legislation.

The economic level of sustainable land use management determines the mechanism of practical action of a company in the field of land relations through assessment, cost, expenses, and profit. It is also realized through the interaction of individual land users and society with land resources (soil, territory, vegetation, and underground resources). Solving the problems of the economic level of sustainable land use management is based on modern principles of involving legislative and executive bodies. Business communities also participate in the search for effective options, drawing on local and global information resources in the field of land use: the analysis of associative and causal relationships between different forms and types of land use, and the implementation of conceptual attitudes and ideological positions.
The scientific and technical principles of sustainable land use in mining areas are proposed to include:
- justification of the strategic priorities and indicators of sustainable land use [10];
- comprehensive (ecological and economic) assessment of land resources with consideration of the peculiarities of the territories [11];
- definition of "corridors" of acceptable land use in specific climatic and socio-economic conditions [12];
- aligning the individual interests of land users with public preferences [13];
- multi-criteria optimization of land use on the basis of ecological, economic and social indicators [14].

We believe that the strategic priorities and indicators of sustainable land use have a clear order of priority: environmental, social and, finally, economic. In the old industrial regions of the Middle and Southern Urals, they reflect the negative consequences of accumulated industrial waste [15], the patterns of morbidity [7] and the need to maintain increasingly complex use of the subsoil [16]. In the Northern and Polar Urals, the strategic priorities lie in supporting the traditional livelihoods of the small indigenous peoples of the North [17], [18] and sustainable subsoil use [19,20]. In the regions of Western Siberia, the priority is multi-purpose land use: subsoil use, development of industrial facilities and residential areas, and forestry.

The methodological tools for the ecological-economic evaluation of land resources, both by components and integrated, are based on natural characteristics (biometric and bioproduction), on technological and technical parameters, and on the economic equivalents of these indicators, with comprehensive criteria defined taking widespread parameters into account. The transformation of lands under anthropogenic and natural impacts, long-term changes (natural resource use processes, the effect of accumulated damage), and the risks of various situations due to climate change are presented in [11].

The assessment of the impact of global climate change on various forms and types of land use in the Urals and Western Siberia [21] is based on well-established results in the land use sectors. For these sectors, first, the most critical climate impacts can be identified (the amount of precipitation and the distribution of river flow between the surface and groundwater components of the catchment); second, the conclusions about the impact of climate change have acceptable validity. Examples of such changes are the changing carbon-sequestering role of lands [22], a shift in the boundaries of plant formations to the north on the plains and upward in mountainous areas [23], thawing areas of permafrost [24], and the transformation of the northern forest-swamp systems [25].
As experience shows, the "corridors" of acceptable land use are most often defined by environmental parameters, which include the level of conservation of natural ecosystems, the balance of natural and anthropogenic flows of energy and biomass in the environment, and the degree of withdrawal of natural resources and objects (vegetation, soil, land) [9,12]. Social parameters (employment in the economy of the region, the health status of the population) are mentioned less often. The preservation of the social functions of natural landscapes and economic parameters (technological, technical, cost, income) are also important. A number of specific goals of territorial planning in the industrial regions of the Urals (sectoral planning goals) have historically been addressed within the framework of land and forest management, the implementation of transport projects, and the problems of mining or hydraulic engineering construction. Environmental planning principles in these works were ignored or addressed narrowly in the interests of industrial planning. As a result, many of the industrial projects have acquired a notorious anti-ecological reputation (the Uralasbest plant, the Kachkanarsky ore mining and processing enterprise, etc.).

The principle of consistency in the development of ore deposits is expressed in the developed technological platform (the author would like to emphasize the word technological). It includes many technological operations on the territory of an administrative unit. This is the system in the field of subsoil use, considered here as "the organization of enterprises ... consuming resources from the outside ...". We describe the system of subsoil use enterprises at the present stage not only by the "external consumption of resources", but also by the technological and economic consequences of such consumption for the enterprise itself, for the surrounding natural environment (violation of biotic regulation in the regions), and for society (the need to harmonize the interests of individual subsoil users with public preferences) under the conditions of modern challenges and risks. The actual value of anthropogenic pressure in such areas greatly exceeds the theoretically permissible limits. The acceptable "corridors" are determined through the value of the indicator called the "environmental footprint". This indicator shows the number of conventional hectares of land needed to support a person's life at the current level of consumption and waste management, including the area needed to absorb CO2 emissions. It provides a simultaneous assessment of the environmentally sustainable development of a region (country); it compares the development of production with the assimilation potential of the biosphere; and it allows the deviation from the "norm" in the socio-ecological and economic development of the region (country) to be determined.
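As a numerical illustration of the indicator, the following minimal sketch sums hypothetical per-capita footprint components and compares them with an assumed regional biocapacity; all numbers are placeholders, not measured data.

```python
# Minimal sketch of the "environmental footprint" indicator.
# All numbers below are illustrative placeholders, not measured data.
footprint_components_gha = {
    "cropland": 0.55,        # conventional hectares per person
    "grazing_land": 0.15,
    "forest_products": 0.30,
    "built_up_land": 0.06,
    "carbon_uptake": 1.70,   # area needed to absorb CO2 emissions
}
biocapacity_gha = 1.6        # assumed available hectares per person

footprint = sum(footprint_components_gha.values())
deviation = footprint - biocapacity_gha

print(f"footprint: {footprint:.2f} gha/person")
print(f"deviation from the 'norm': {deviation:+.2f} gha/person")
if deviation > 0:
    print("production exceeds the assimilation potential of the biosphere")
```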
The practical meaning of the indicator is to show what must be striven for in order to implement sustainable land use in a given territory. In practice, this would mean closing down existing enterprises, which cannot be done all at once. Under these conditions, the authors propose fixing the state at the time of assessment, which is already determined by the mining allotment or other types of land use, and implementing landscape planning of the entire administrative territory. Landscape planning (in all the variety of its definitions) is understood here as a set of methodological tools and procedures used to build a spatial organization of activity in particular landscapes that would ensure sustainable nature management and the preservation of the basic functions of the landscape as a life support system [26]. The assessment stage of landscape planning makes it possible to obtain an objective assessment of the existing natural conditions of the planning territory. The criteria recommended for such an assessment should meet the following requirements:
- be focused on the main objectives of the use of the territory under equal priorities of preserving the ecological balance and sustainable socio-economic development;
- reflect the current state of the natural environment in both natural and modified ecosystems under the impact of economic activity;
- give an idea of possible changes in the state of individual natural components under the main directions of use of the territory and the permissible level of such use [27].

These requirements are embodied in the categories of "value" and "sensitivity" of individual components of the natural environment. As a result of processing all the information, a set of maps of industrial use of the territory is created, in which the territory is zoned by types of use. There are three types of goals:
- conservation;
- development;
- improvement.

After that, a concept map of the use of the territory is created on the basis of the analysis of socio-economic problems (including maps of the actual use of the territory). It identifies areas recommended for the preservation of the natural environment and for socio-economic development. It also delineates the territories with the most acute environmental problems, for which specific measures for the restoration of the landscape are planned, and specifies the directions of development of the territory. All the systems disturbed during use are combined into one zone for the purpose of their improvement and restoration. The duration and technology of landscape restoration may vary depending on the nature and degree of degradation [27].

Coordinating interests in land use in industrial Western Siberia and the Urals holds a special position in the methodology. In this region, the interests of public and private landowners often overlap, most visibly in the subsoil use areas. Here, the individual interests of subsoil users play out over relatively short periods of time (the duration of deposit development), while public preferences require the preservation of permanent, long-term subsoil use. The main features of the ratio of individual and public interests in subsoil use are shown in Table 1.
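The zoning step can be illustrated with a minimal sketch; the numeric "value" and "sensitivity" scores and the decision thresholds below are hypothetical, since the text prescribes only the two categories and the three goal types.

```python
# Minimal sketch of zoning by "value" and "sensitivity".
# Scores and thresholds are hypothetical; the text prescribes only
# the two assessment categories and the three goal types.

def zone_goal(value: float, sensitivity: float) -> str:
    """Assign a landscape unit to one of the three goal types."""
    if value >= 0.7 and sensitivity >= 0.7:
        return "conservation"   # valuable and fragile: preserve
    if value < 0.4 and sensitivity < 0.4:
        return "development"    # low value, robust: socio-economic use
    return "improvement"        # disturbed or mixed: restore

units = {"floodplain forest": (0.9, 0.8),
         "abandoned quarry": (0.2, 0.3),
         "derived birch stand": (0.5, 0.6)}

for name, (v, s) in units.items():
    print(f"{name}: {zone_goal(v, s)}")
```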
The procedure for coordinating individual interests and public preferences consists of:
- consistent greening of the subsoil use economy: from its existing form of income maximization (by reducing own expenses), first to the accounting and discounting of external costs, and then to the economy of sustainable development (the "green" economy) [6,7] with maximum consideration of environmental consequences and minimization of negative impacts (Table 2);
- justification of the ratios of the market discount rates of subsoil users and the discount rates of social preferences [28];
- the study (definition) of concessions between the interests of individual subsoil users and public preferences on the basis of the analysis of the dependence of the criteria on the options of deposit development [13].

In the current economic system, the market discount rates for subsoil use systematically exceed the public discount rate, for the following reasons. First, individual subsoil users discount their economic income taking risks into account (economic, socio-political, and environmental); at the same time, the insecurity of ownership of the subsoil object (license areas) increases the risks. Some risks of subsoil users are not risks for society: they are associated with transfers within the company (transfer of rights, transfer of payments, etc. [29]). Subsoil users (private capital) are very reluctant to take risks in the implementation of scientific and technical projects characterized by unpredictable and uneven results [30]. Second, subsoil users are guided by considerations of a limited (often relatively short) period of operation of the deposit [31] and therefore use high discount rates. Public preferences deny differences in attitude to different periods (generations), so the discount period is long [32]: society should act as if the discount rate (reflecting the norm of time preference) is at a minimum. Table 3 shows the values of the rates and the discount factor (by the compound interest formula) depending on the duration of the period of use of the subsoil plot [11]; a numerical illustration is given below.

The search for optimal (effective) options for sustainable land use begins with the determination of the optimal options for all particular criteria, with the disclosure of the uncertainty of single-criterion solutions. For this purpose, a matrix of land use options in the zone of uncertainty of optimal solutions is compiled (Table 4). The uncertainty is resolved using specific criteria. The "average costs" criterion in land use is determined by the maximum of the average values of the indicator $P$ over each set of state parameters (the columns of the matrix):

$P_{\mathrm{opt}} = \max_i \big( \tfrac{1}{n} \sum_{j=1}^{n} P_{ij} \big)$.   (1)

Table 2. The sequence of greening the economy for the conditions of subsoil use (the use of parts of the subsoil). (Columns: type of discounted income; determination of the discounted income. Rows include: present value of Э with internal costs for the period T, years.)
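The compound interest discounting invoked above can be sketched as follows; the two rates (a high market rate for an individual subsoil user and a low rate of social preferences) are illustrative and are not the values of Table 3.

```python
# Discount factor by the compound interest formula: D(t) = 1 / (1 + r)**t.
# The rates below are illustrative; Table 3 of the source gives the
# actual values, which are not reproduced here.

def discount_factor(rate: float, years: int) -> float:
    return 1.0 / (1.0 + rate) ** years

private_rate = 0.12   # market rate of an individual subsoil user
social_rate = 0.03    # rate of social (public) preferences

for t in (5, 20, 50):
    print(f"t = {t:>2} yr: private D = {discount_factor(private_rate, t):.3f}, "
          f"social D = {discount_factor(social_rate, t):.3f}")
# With a high private rate, income beyond ~20 years is almost worthless,
# which is why short-term deposit development dominates private decisions.
```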
When using the "minimax cost" criterion, the land-use option is chosen for which the worst result is better than the worst result of any other option:

$P_{\mathrm{opt}} = \max_i \min_j P_{ij}$.   (2)

Criterion (2), compared to criterion (1), insures against negative consequences under the most unfavorable implementation of the land-use management system.

For particularly complex cases of land use organization, the rational option is chosen according to the "minimax risk" criterion. The payoffs $P_{ij}$ are converted into the risk matrix $R_{ij}$ according to the relation

$R_{ij} = \max_i P_{ij} - P_{ij}$,   (3)

where the maximum is taken over the options for a given state $j$, and the option minimizing the maximal risk is chosen. The purpose of this criterion is to eliminate the risk of excessive losses when extreme conditions of land use objects (climate change, flood risks, natural fires) appear.

Multi-criteria optimization of sustainable land use is performed in relatively simple situations by the lexicographic method, and in more complex cases by the method of successive concessions [14,33]. The figure gives a graphical interpretation of the justification of the concessions $a_P$ and $a_З$ for the criteria max P (maximum land use efficiency, the level of resource potential) and min З (minimum total costs). The optimal land use option is the solution of a two-criterion problem, solved in the following sequence:
1) find $\max P(X_1; Y_1)$;
2) find $\min З(X_2; Y_2)$ subject to $P(X; Y) \ge \max P(X_1; Y_1) - a_P$.

Summary

Addressing the current challenges and risks of sustainable land use is an important social goal for land use planning in all the diversity of land use. In view of this, land use management should become one of the priorities of environmental, social and economic policy. The practical provision of methodological support for sustainable land use should rest on a fundamental scientific base and the latest achievements of science and practice. Thus, accounting for modern challenges and risks (accumulation of environmental harm, increasing frequency and intensity of extreme weather and climate events) in sustainable land use in mining areas amounts to implementing the proposed methodological provisions. Enterprise planning and the management of land resources should be carried out on the basis of the hierarchy of sustainable land use management levels within the concept of biotic regulation of the environment and landscape planning of the territory within its administrative boundaries.
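A minimal numerical sketch of criteria (1)-(3) and of the concession step described before the Summary; the payoff matrix, costs and concession value are invented for illustration, with P treated as an efficiency indicator to be maximized.

```python
import numpy as np

# Rows: land use options; columns: states of nature (e.g., climate scenarios).
# The payoff matrix P, the costs and the concession a_P are invented.
P = np.array([[60.0, 40.0, 20.0],
              [50.0, 45.0, 35.0],
              [70.0, 30.0, 10.0]])

avg_best = int(np.argmax(P.mean(axis=1)))        # criterion (1): max of averages
maximin_best = int(np.argmax(P.min(axis=1)))     # criterion (2): best worst case

R = P.max(axis=0) - P                            # criterion (3): risk matrix
minimax_risk_best = int(np.argmin(R.max(axis=1)))

print("average:", avg_best, "maximin:", maximin_best,
      "minimax risk:", minimax_risk_best)

# Successive concession on two criteria: among options whose worst-case P
# is within a_P of the best, minimize the total cost З.
cost = np.array([100.0, 80.0, 120.0])
a_P = 10.0
admissible = P.min(axis=1) >= P.min(axis=1).max() - a_P
print("chosen option:", int(np.argmin(np.where(admissible, cost, np.inf))))
```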
Table 1. The ratio of individual and public interests in subsoil use.

Interests (preferences) of individual subsoil users and their consequences | Interests (preferences) of society in the field of nature management
Interest: maximum profit over a relatively short period of subsoil use | Consideration of the long-term nature of subsoil use in the interests of the existence of society
Interest: maximum use of the most accessible types of useful resources; minimization of internal expenses | Optimal use of the entire natural resource potential of the territory (resources, environmental functions, and social role)
Consequence: low efficiency of certain types of mineral resources under market conditions | Improving the efficiency of types of subsoil use that are of little relevance to the market through the use of the whole set of natural resources
Consequence: side and indirect effects of subsoil use are ignored | Secondary and indirect effects are considered or are required to be considered
Consequence: "conservation of natural benefits" is not taken into account, or only to a small extent | Public interests assume: employment in the region's economy; long-term stabilization of the natural resource potential of the territory; preservation of certain types of natural benefits (natural ecosystems)
Interest: no interest in using subsoil use profits for environmental protection and the technical improvement of production | Society's interest in preserving the natural environment and in using subsoil use profits to create and develop infrastructure for deep processing of extracted resources
Consequence: high risks of adverse environmental and economic situations | Reducing the impact of negative risks in subsoil use through the change of types of activity and the summation of protective measures
Consequence: high risks for the development of breakthrough technologies; inability to overcome the threshold of simultaneous costs for breakthrough technologies | The state takes risks and costs upon itself through guarantees, budget financing and targeted programs; the state is a major actor in the market of new technologies

Figure. Graphical interpretation of the justification of concessions on the criteria max P and min З: (a) dependence of the criterion P, %, on the land use options; (b) dependence of the total cost criterion З, standard units, on the land use options; (c) the dependence between the criteria of total costs З and P.

Notation (for Table 2): Эt - value of subsoil use products; Зt - expenses; et - environmental costs of production, including the costs of preventing harm to the environment (treatment facilities) and the economic damage from environmental pollution (payments for emissions of pollutants); P - discount indicator; Ct - external expenses; Yt - amount of long-term ecological and economic impacts over a period much larger than T; T - time; t - discounting period.
5,542.6
2018-01-01T00:00:00.000
[ "Environmental Science", "Economics" ]
A Review on Practical Considerations and Solutions in Underwater Wireless Optical Communication Underwater wireless optical communication (UWOC) has attracted increasing interest in various underwater activities because of its order-of-magnitude higher bandwidth compared to acoustic and radio-frequency technologies. Testbeds and pre-aligned UWOC links were constructed for physical layer evaluation, which verified that UWOC systems can operate at tens of gigabits per second or close to a hundred meters of distance. This holds promise for realizing a globally connected Internet of Underwater Things (IoUT). However, due to the fundamental complexity of the ocean water environment, there are considerable practical challenges in establishing reliable UWOC links. Thus, in addition to providing an exhaustive overview of recent advances in UWOC, this article addresses various underwater challenges and offers insights into the solutions. In particular, oceanic turbulence, which induces scintillation and misalignment in underwater links, is one of the key factors in degrading UWOC performance. Novel solutions are proposed to ease the requirements on pointing, acquisition, and tracking (PAT) for establishing robustness in UWOC links. The solutions include light-scattering-based non-line-of-sight (NLOS) communication modality as well as PAT-relieving scintillating-fiber-based photoreceivers and large photovoltaic cells as the optical signal detectors. Naturally, the dual-function photovoltaic-photodetector device readily offers a means of energy harvesting for powering up the future IoUT sensors.

Subsea military activities are among the examples of the growing need to explore the oceans for industrial, scientific, and military purposes. For instance, Saudi Aramco, which is the largest oil and gas company in the world, has over 43,000 km of offshore oil pipelines to be monitored, thereby requiring an efficient, secure, and high-speed underwater wireless communication technology. Acoustic communication, which is the most common technology in underwater wireless communication, dates back to 1490, when Leonardo da Vinci suggested detecting ships in the distance by acoustic means [1]. Today, studies of the physical layer of underwater acoustic communication have reached a certain level of maturity. Numerous sea trials have demonstrated such communication over tens of kilometers or beyond [2] and transmission rates of tens of kilobits per second or higher [3]- [7], the latter being a substantial advance on the few tens of bits per second of the early stage [8], [9]. Acoustic-based video transmission has also been demonstrated [6].
Figure 1 shows the published experimental performance of underwater acoustic telemetry systems in terms of their data rates versus their ranges, with a range-times-rate bound to estimate the existing performance envelope [10]. As physical-layer verification matures, calls are emerging to integrate acoustic modems into networks. Some platforms (e.g., SUNRISE [11], LOON [12], and SWARMs [13]) require network technologies such as medium access control (MAC) [14], multiple input and multiple output (MIMO) [15], [16], localization [17], [18], route discovery [19], and energy harvesting [20]. Considering the limited data rate of the acoustic method regardless of its maturity, the increasing need for high-speed underwater data transmission is driving the development of high-bandwidth communication methods. Radio-frequency (RF) technology typically delivers digital communication or full-bandwidth analog voice communication at rates of tens of megabits per second in terrestrial environments over the kilometer range [21]. However, researchers are also attempting to deploy RF technology in unconventional environments, such as (i) underground to monitor soil properties and build underground networks [22], [23], and (ii) underwater to build underwater sensor networks. Despite the considerable RF attenuation in water, which increases drastically with frequency [24], there are a few prior works on underwater RF communication [25]- [27]. In these works, a long transmission distance is always achieved by sacrificing the bandwidth (40 m and 100 bit/s at 3 kHz) [26] or vice versa (16 cm and 11 Mbit/s at 2.4 GHz) [25]. Table I summarizes the realizable ranges and data rates of underwater RF communication systems [24].

Given the limited performance of underwater acoustic and RF communication, underwater wireless optical communication (UWOC) has become a transformative alternative. Optical wireless communication (OWC) is data transmission in an unguided propagation medium through an optical carrier, namely ultraviolet (UV), visible, or infrared. Unlike the expensive, licensed, and limited electromagnetic spectrum in RF, the largely unlicensed spectrum (100-780 nm, or ~30 PHz) in OWC enables wireless data transmission at extremely high data rates of up to gigabits per second (Gbit/s) [28]. In fact, the development of OWC has been ongoing since the very early years of human civilization. Signaling by means of beacon fires, smoke, ship flags, and semaphore telegraph can be considered as historical forms of OWC [29]. In 1880, Alexander Graham Bell invented the photophone based on modulated sunbeams, thereby creating the world's first wireless telephone system that allowed the transmission of speech [30]. The recent development of high-speed, power-efficient optoelectronic devices has offered the promise of OWC data rates of up to 100 Gbit/s [31] with transmission links of a few kilometers [32]. Such devices include light-emitting diodes (LEDs) [33], superluminescent diodes [34], laser diodes (LDs) [35], photodetectors [36], modulators [37], and the integration of these devices [38]. Furthermore, because of the high energy efficiency of these high-speed optical emitters, OWC with dual functionality, such as light fidelity (Li-Fi), has been proposed for simultaneous lighting and communication purposes [39].

However, because of the complexity of aquatic environments, the early development of UWOC lagged far behind terrestrial OWC. The first experimental UWOC demonstration was made by Snow et al.
in 1992, achieving a data rate of 50 Mbit/s over a 5.1 m water channel with a gas laser [40]. In 2006, by using a 470 nm blue LED, Farr et al. achieved a 91 m UWOC link with a rate of 10 Mbit/s [41]. The first gigabit (1 Gbit/s) UWOC system was implemented by Hanson et al. in 2008 using a diode-pumped solid-state laser [42]. However, more considerations are needed for the physical layer of UWOC to mature, one being the selection of a light wavelength that is suitable for use underwater. In the presence of underwater microscopic particulates and dissolved organic matter in different ocean waters, absorption and multiple scattering cause irreversible loss of optical intensity and severe temporal pulse broadening, respectively [43], which in turn degrade the 3 dB channel bandwidth [44]. Because of the low attenuation coefficients, blue-green light is preferable in clear and moderately turbid water conditions [45]. For highly turbid water, the channel bandwidth can be broadened by using a red-light laser because of the lower scattering at a longer wavelength, as investigated numerically by Xu et al. [46]. Based on that study, Lee et al. demonstrated the performance enhancement experimentally by utilizing a near-infrared laser; they showed that the overall frequency response of the system gains an increment of up to a few tens of megahertz with increasing turbidity [47]. These investigations led to the demonstration of real-time ultra-high-definition video transmission over underwater channels with different turbidities [48].

Besides the selection of a suitable transmission wavelength, recent years have seen much consideration of modulation schemes, system configurations, and optoelectronic devices. Efficient and robust modulation schemes and system configurations such as orthogonal frequency-division multiplexing (OFDM) [49], pulse-amplitude modulation (PAM) [50], discrete multitone (DMT) with bit and power loading [51], and injection locking [52] are now used to achieve high data rates. Highly sensitive photodetectors such as photomultiplier tubes (PMTs) [53], single-photon counters [54], and multi-pixel photon counters are now used for long-haul communication [55]. Figure 2 summarizes the recent advances of laser-based UWOC systems [40], [42], [46], [49]- [70]. In that plot, the extinction length, which is defined as the product of the transmission range and the attenuation coefficient of the water channel, is used to normalize the effect of water turbidity.

Despite the aforementioned previous investigations, if UWOC is to be used in real oceanic environments, then we must consider how UWOC systems are affected by oceanic turbulence. One of the main challenges with conventional UWOC systems is posed by the strict requirements on pointing, acquisition, and tracking (PAT). It is especially challenging to maintain PAT in the presence of oceanic turbulence because of optical beam fluctuations and, thus, misalignments. To build robust UWOC links that mitigate the effect of turbulence, we highlight herein our solutions, including the non-line-of-sight (NLOS) UWOC modality, scintillating-fiber-based photoreceivers, and photovoltaic (PV) cells with a large active area as signal detectors to ease the PAT requirement. Furthermore, by using highly sensitive PV cells as photodetectors, we show simultaneous energy harvesting and signal detection in an underwater environment, thereby also providing solutions to the question of how to supply energy to an underwater data transceiver.
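The role of the attenuation coefficient, and the extinction-length normalization used in Fig. 2, can be sketched with the Beer-Lambert law; the coefficient values for the two water types below are illustrative.

```python
import numpy as np

# Beer-Lambert attenuation of an optical beam in water:
#   P_r = P_t * exp(-c * z),
# where c is the attenuation coefficient (absorption + scattering)
# and z the range. Coefficient values below are illustrative.

def received_power(p_t_mw: float, c_per_m: float, z_m: float) -> float:
    return p_t_mw * np.exp(-c_per_m * z_m)

for water, c in [("clear ocean", 0.15), ("turbid harbor", 2.0)]:
    z = 20.0
    print(f"{water}: P_r = {received_power(50.0, c, z):.3e} mW, "
          f"extinction length c*z = {c * z:.1f}")
```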
II. OCEANIC TURBULENCE

In the presence of oceanic turbulence, the optical signal suffers random variations that are commonly known as scintillations. This phenomenon is due to random changes in the refractive index along the path of propagation, which in turn cause random changes in the direction of photons traveling through the water medium. Because the active areas of commonly used photodetectors are kept small to ensure fast communication links, even slight variations in the direction of the beam can cause signal fading. Underwater turbulence, which can persist for a relatively long time, can be induced by variations in temperature, salinity, or pressure, and by air bubbles in the water channel. Understanding this turbulence-induced fading is critical to establishing long-distance yet stable UWOC links, which is the primary motivation for the vast amount of previous research into water turbulence and its effects on optical links. This research examined the statistical characteristics of underwater turbulence, its impact on the propagation of light, and potential techniques to mitigate those effects.

One way to quantify the strength of the turbulence is to determine the scintillation index of the received signal, which is defined as the variance of the received normalized intensity and is expressed as $\sigma_I^2 = (\langle I^2 \rangle - \langle I \rangle^2)/\langle I \rangle^2$, where $I$ is the received intensity and $\langle\cdot\rangle$ denotes the average taken over a long duration. High values of the scintillation index correspond to strong turbulence, which results in poorer performance of UWOC links.

A study conducted in the Tongue of the Ocean in the Bahamas measured the refractive-index structure constant to quantify the strength of the turbulence [71]. Other experiments have been conducted in emulated laboratory environments to statistically study the histogram of the received intensity in the presence of turbulence-induced fading caused by random and gradient changes in temperature and salinity, and by air bubbles in the water channel [72]- [74]. In those studies, the experimentally obtained histograms were fitted with well-known statistical distributions, and the goodness of fit was reported in each case. Such statistical results allow underwater turbulence to be modeled in calculations and simulations and facilitate methods to counter the associated performance degradation. For example, a model was developed to produce a closed-form expression of the bit error ratios (BERs) in vertical underwater channels and was verified using computer simulations [75]. Numerical calculations have also been used to study turbulence and to confirm that increasing the aperture size improves the performance under turbulence-induced fading [76]. Similarly, it was also shown experimentally that using wider beams can improve the performance of UWOC links in the presence of air bubbles [77]. Using beam expansion and aperture averaging is analogous to using spatial diversity in MIMO systems, because the light beam travels through a wider space compared to a narrower beam. Moreover, spatial diversity can be achieved by using multiple transmitters. For example, the performance of a multiple-input single-output system has been evaluated [78], in which the transmitters were arranged in a uniform circular array, and it was shown that such a system improves the performance of UWOC links in turbulent water channels. A comprehensive study of the performance of MIMO systems has also been presented [79], and the performance of different wavelengths in the presence of temperature and salinity gradients has been studied [80].
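A minimal sketch of estimating the scintillation index from a recorded intensity trace; the synthetic log-normal fading used to generate the samples is a common weak-turbulence model and is illustrative only.

```python
import numpy as np

def scintillation_index(intensity: np.ndarray) -> float:
    """sigma_I^2 = (<I^2> - <I>^2) / <I>^2 over a long intensity record."""
    mean = intensity.mean()
    return (np.mean(intensity**2) - mean**2) / mean**2

# Synthetic example: log-normal fading, a common weak-turbulence model.
rng = np.random.default_rng(0)
samples = np.exp(rng.normal(loc=0.0, scale=0.3, size=100_000))

print(f"sigma_I^2 = {scintillation_index(samples):.3f}")
# Larger values indicate stronger turbulence and poorer UWOC performance.
```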
The results showed that the scintillation index decreases significantly with wavelength, which suggests improved performance at longer wavelengths because they are more immune to scintillation. However, it is important to note the critical trade-off between using longer wavelengths, which suffer from higher attenuation, and shorter wavelengths, which suffer from stronger turbulence-induced fading. Furthermore, the reciprocity of the effects of underwater turbulence on UWOC performance has also been studied [81]. The importance of channel reciprocity lies in the fact that it alleviates the need for feedback to the transmitters to provide the channel state information in duplex links, because they can extract it from the received signals.

To show how turbulence affects the beam position, we used a quadrant detector sensor head (PDQ90A; Thorlabs) with its auto-aligner cube (KPA101; Thorlabs) to monitor the change in beam position in the presence of a 0.1 °C/cm temperature gradient. Figure 3 shows the relative position recorded over 100 s with a sampling rate of 1 kHz. Based on Fig. 3, we note that the beam position in the presence of turbulence changes randomly with time, thereby potentially degrading the performance of UWOC. We also note that the change on the horizontal axis (from -1 to 1) exceeds that on the vertical axis (from -0.4 to 0.4), owing to the deformation of the beam by the vertical temperature difference, which gives the beam profile an oval shape. The oval shape is reflected in the variances of the relative horizontal and vertical positions, which are 0.06 and 0.02, respectively.

III. NON-LINE-OF-SIGHT UNDERWATER WIRELESS OPTICAL COMMUNICATION

Because of the complexity of the oceanic environment, including turbulence [80], turbidity [82], and undersea obstacles [77], severe signal fading occurs if a misalignment of the optical link happens in line-of-sight (LOS) UWOC, leading to degraded information transfer. By contrast, NLOS UWOC [83], a modality that relieves the strict PAT requirements, promises robust data-transfer links in the absence of perfect alignment. An NLOS UWOC system relies on either reflection from the water surface [84] or light scattering [85] from molecules and particles in the water (e.g., plankton, particulates, and inorganics). Compared with reflection-based NLOS, scattering-based NLOS is more robust because it avoids the possibility of signal fading from a wavy surface. Furthermore, to receive the signal, reflection-based NLOS requires a certain pointing angle to the water surface to make the reflected light travel into the field of view (FOV) of the receiver. Therefore, we focus herein on scattering-based NLOS, which entirely relieves the PAT requirements. In such links, the transmitted photons are redirected multiple times by the molecules in the water before being detected by the photoreceiver. Therefore, a light beam with strong scattering properties is favorable in NLOS UWOC. Cox et al.
measured the total light-scattering cross sections of microscopic particles over the entire visible spectrum [86]. They showed that shorter wavelengths exhibit higher scattering for both Rayleigh and Mie scattering. Therefore, blue light (400-450 nm), the shortest-wavelength visible light, is preferred for NLOS UWOC. However, constrained by the available devices, previous works on NLOS UWOC relied mainly on simulations. Monte Carlo simulations [87] and the Henyey-Greenstein (HG) phase function [88] were used to develop models describing the trajectories of the transmitted photons. The impulse response [85], BER performance [89], and the effects of channel geometry on path loss [83], [90] have also been predicted based on theoretical simulations. Herein, for the first time, we experimentally demonstrate a high-speed blue-laser-based NLOS UWOC system in a diving pool.

In our pool deployment, we used as the transmitter a 450 nm blue LD (PL TB450B; Osram) operating at 0.18 A with an optical emission power of 50 mW, enclosed in a remotely operated vehicle (ROV-1), and as the receiver a PMT (PMT R955; Hamamatsu) with a high sensitivity of 7×10⁵ A/W carried by ROV-2. As shown in Fig. 4(a), the laser and PMT were separated by either 1.5 or 2.5 m. At the far end of the laser beam, a beam dump made of black silicon was used to minimize the light reflected from the pool wall and to ensure that all the received light was due to the scattering process. As shown in Fig. 4(b), the laser and PMT pointed in parallel to fully relieve the alignment requirements. At the transmitter side, an alternating current (AC) signal was generated by a pattern generator (ME522A) with a pseudorandom binary sequence of pattern length 2¹⁰−1, modulated with non-return-to-zero on-off keying (NRZ-OOK). The PMT was operated at 15 V with a high-voltage control voltage of 2 V, and an OD2 neutral-density filter was placed in front of the PMT window to keep the incident power within the detection range of the PMT. The water was pool water with an absorption coefficient of 0.01 m⁻¹ and a scattering coefficient of 0.36 m⁻¹. Figure 5 shows that a data rate of 48 Mbit/s was achieved with a BER of 2.6×10⁻³ when the transmitter-receiver separation distance was 1.5 m, which is below the forward error correction (FEC) limit of 3.8×10⁻³. Meanwhile, for the separation distance of 2.5 m, a maximum data rate of 20 Mbit/s was obtained with a BER of 2×10⁻⁴. The corresponding eye diagrams are shown in Fig. 6. As the data rate increases, the eyes close, yielding a higher BER. Moreover, the eyes for the 1.5 m separation are less noisy than those for 2.5 m; the degradation at the greater separation is due to the weaker received light and the increased inter-symbol interference caused by multipath scattering. Nevertheless, we have demonstrated, for the first time, a high-speed NLOS UWOC link with the PAT requirements fully relieved by using a blue laser. Furthermore, we envisage that longer-haul NLOS UWOC could be developed in the future based on photon-counting modes using algorithms for pulse counting, synchronization, and channel estimation [91].
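The link evaluation logic can be sketched as follows; the additive-Gaussian-noise OOK model with threshold detection is a textbook stand-in for the actual PMT receiver chain, and the FEC limit of 3.8×10⁻³ is the one quoted above.

```python
import numpy as np

# NRZ-OOK over a noisy channel: count bit errors against the FEC limit.
# The Gaussian-noise model is a stand-in for the PMT receiver chain.
rng = np.random.default_rng(1)
FEC_LIMIT = 3.8e-3

bits = rng.integers(0, 2, size=1_000_000)           # pseudorandom bit stream
received = bits + rng.normal(0.0, 0.17, bits.size)  # unit OOK levels + noise
decided = (received > 0.5).astype(int)              # threshold detection

ber = np.mean(decided != bits)
print(f"BER = {ber:.2e}, below FEC limit: {ber < FEC_LIMIT}")
```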
IV. OMNIDIRECTIONAL FIBER PHOTODETECTOR WITH LARGE ACTIVE AREA

Paving the way for the upcoming era of the Internet of Underwater Things (IoUT), developments on the transmitter side have enabled transmission of up to gigabits per second in underwater environments [92]. However, on the receiver side, the small detection area of conventional photodiodes impedes practicality. Although commercial photodiodes have demonstrated high modulation bandwidths of up to gigahertz, their detection areas are limited to only a few square millimeters, largely because of the resistance-capacitance limit of the photodiode [93]. Considering the severe conditions in underwater environments, and to relieve the strict PAT requirement, large-area photoreceivers with high modulation speeds are essential both for practicality and to improve the connectivity among trillions of IoUT devices.

Scintillating fibers, which rely on the photon conversion process of the doped molecules in the fiber to propagate the converted light to the fiber end, were used as optical receivers for corona discharges in early work [94], [95]. Working on principles similar to those of luminescent solar concentrators [96]- [99], scintillating fibers rely on the doped molecules in the core of the fiber to absorb the incoming light and re-emit it at a longer wavelength. The re-emitted light then propagates efficiently along the core of the fiber to the fiber end. The first demonstration of scintillating fibers as the photoreceiver for free-space optical communication (FSO) was reported by Peyronel et al. in 2016 [100]. The design was devised for indoor visible-light communication under eye-safe conditions. The advantages of scintillating fibers include the flexibility to form large-area photoreceivers of various sizes with no significant deterioration in response speed. Inspired by these prior studies, we aim to demonstrate the fundamental potential of scintillating fibers as large-area photoreceivers for UV-based UWOC. Compared to traditional photodiodes, this would eventually improve the practicality of UWOC in actual ocean environments with a large angle of view and omnidirectional detection [101].

As a proof of concept, a large-area photoreceiver made of commercially available scintillating fibers was constructed, as shown in Fig. 7.
As shown in Fig. 7(a), the photoreceiver comprises around 90 strands of scintillating fibers, forming a planar detection area of roughly 5 cm². To demonstrate the modulation capabilities of the scintillating-fiber-based photoreceiver, we used a 375 nm UV LD (NDU4116; Nichia) as the transmitter to send a modulated optical signal over a 1.5-m-long water channel. The photoreceiver was placed at the other end of the water tank, and the fiber ends were coupled into a commercial avalanche photodetector (APD) (APD430A2; Thorlabs) through a series of condenser lenses. Figure 7(b) shows the collimated UV light beam incident on the planar detection area of the large-area scintillating-fiber-based photoreceiver. The photoreceiver is evidently large enough to cover the entire profile of the collimated beam with no additional lenses. In addition, the small-signal frequency response of the large-area scintillating-fiber-based photoreceiver was tested over the same water channel. Figure 8 shows a 3-dB bandwidth of 91.91 MHz, which is high compared to a conventional photodiode with the same detection area. The modulation bandwidth is primarily governed by the recombination lifetime of the dye molecules [100], [104], which eliminates the need to balance the design trade-off between detection area and modulation bandwidth found in conventional photodiodes. Moreover, although the angle of view of a conventional photodiode can be improved with additional receiver lenses, it is challenging to attain flexibility and omnidirectional detection. The inset of Fig. 8 shows a photograph of the large-area scintillating-fiber-based photoreceiver, whose high flexibility allows it to form a spheroid-like photoreceiver for omnidirectional detection.

Moreover, the data rate of the scintillating-fiber-based photoreceiver in an underwater communication link was tested by modulating the 375 nm UV laser. The transmitter was connected to a BER tester (J-BERT N4903B; Agilent) for OOK signal generation. The signals were transmitted through a 1.5-m-long water channel to the scintillating-fiber-based photoreceiver before being coupled into an APD, which was then connected back to the J-BERT. Figure 9(a) and (b) show the eye diagrams and corresponding BERs below the FEC limit at 150 Mbit/s and at a maximum attainable rate of 250 Mbit/s, respectively. This shows the potential of a large-area, high-bandwidth scintillating-fiber-based photoreceiver for establishing UV-based data transmission in underwater channels. By using a more complex modulation scheme (e.g., PAM, OFDM, DMT) coupled with bit-loading and pre-equalization techniques, higher data rates of up to gigabits per second could be expected with the large-area scintillating-fiber-based photoreceiver.
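A minimal sketch of extracting the 3-dB bandwidth from a sampled small-signal frequency response; the single-pole response with a 92 MHz corner is an illustrative stand-in for the measurement behind Fig. 8.

```python
import numpy as np

# Estimate the 3-dB bandwidth from a sampled frequency response.
# The single-pole response below is an illustrative stand-in for data.
freq_hz = np.linspace(1e6, 500e6, 2000)
f_corner = 92e6
response_db = 20 * np.log10(1.0 / np.sqrt(1.0 + (freq_hz / f_corner) ** 2))

# First frequency where the response drops 3 dB below its low-frequency value.
ref_db = response_db[0]
idx = np.argmax(response_db <= ref_db - 3.0)
print(f"3-dB bandwidth ~= {freq_hz[idx] / 1e6:.1f} MHz")
```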
Table II summarizes the photodetection techniques used in UWOC. Comparatively, the photodetection scheme based on scintillating fibers offers a large modulation bandwidth compared to the prior works, without sacrificing the detection area. Moreover, compared to conventional photoreceivers based on Si photodiodes [46], [59], [67], [102] and solar panels [103], the use of scintillating fibers renders large-area detection while preserving the modulation bandwidth of the accompanying Si photodiode. This could also shortcut the costly and lengthy development path for a UV photoreceiver with a large detection area and high response speed [104]- [107]. Hence, the approach can accelerate the realization of the UV-based NLOS communication modality to obviate the strict PAT requirements in UWOC.

V. PHOTOVOLTAIC CELLS FOR SIMULTANEOUS SIGNAL DETECTION AND ENERGY HARVESTING

Following the vigorous development of information technology and the popularization of the IoUT concept, energy issues have become a bottleneck for power-hungry UWOC devices. To support underwater equipment for massive data processing and long-distance communication, it is essential to develop and use sustainable energy resources and to explore advanced energy-storage technologies. As a renewable and green energy, solar energy is undoubtedly an alternative for resolving these energy issues. In recent years, PV cells, which are increasingly popular alternatives to traditional photodetectors, have been studied extensively in the field of OWC [108]- [113]. Apart from harvesting energy through the direct current (DC) component of the light source, a PV cell can also convert AC signals superimposed on the light source back into electrical signals for signal detection. Table III summarizes the communication performance of several OWC systems based on different kinds of PV cells. Most of the previous works on PV cells for OWC have focused on improving the data rate and bandwidth by using various novel PV cells. In [111], 34.2-Mbit/s signals were received by using organic PV cells and a red laser over a 1-m air channel. In addition to using LEDs or lasers operating in the visible-light band and silicon wafer-based PV cells for OWC [108]- [111], researchers have also employed mature near-infrared laser sources and GaAs PV cells to implement efficient energy harvesting and high-speed FSO communication [113]. However, an essential prerequisite for realizing long distances and high speeds is strict alignment, which limits the use of conventional OWC for mobile underwater platforms. Consequently, work remains lacking on resolving the above issues. Inspired by these previous studies, PV cells with the dual functions of signal acquisition and energy harvesting show good prospects for application in energy-hungry marine environments. In Ref. [103], the authors first stressed the importance of PV cells for UWOC in resolving energy issues in underwater environments. Considering the complexity of underwater channels, the authors also highlighted the advantages of PV cells with large detection areas, which can significantly alleviate the alignment issues caused by mobile transmitters and receivers. To promote the application of PV cells in practical UWOC scenarios, we use in the following a white laser with a large divergence angle for simultaneous lighting and optical communication in UWOC [121]. We explore PV cells with large detection areas that are capable of detecting weak light, which alleviates the alignment issues and lays the foundation for the future implementation of long-distance underwater communication.

Figure 10 shows the schematic of the PV-cell-based UWOC system. Because the measured bandwidth of the PV cell is only around 290 kHz, a highly spectrally efficient modulation format (i.e., OFDM) was used in the experiment to improve the data rate. The OFDM signals were generated offline. The bit number of the pseudorandom binary sequence was 2²⁰−1. The size of the inverse fast Fourier transform was 1024. The numbers of efficient subcarriers and of subcarriers for the frequency gap near DC were 93 and 10, respectively. The number of OFDM symbols was 150, including four training symbols for channel equalization and two for timing synchronization. The cyclic prefix length was 10. Four-quadrature amplitude modulation (4-QAM)-OFDM signals were sent from an arbitrary-waveform generator (AWG) with a sampling rate of 5 MHz. After being adjusted by an amplifier (AMP) and an attenuator (ATT), the OFDM signals were superposed on a white LD via a bias tee. Over a 2.4 m transmission distance in the diving pool, the optical signals were detected by a PV cell with a detection area of 36 cm² (6 cm × 6 cm). Note that the experiment was conducted in the daytime, and thus the main background noise is attributed to sunlight and the underwater channel. To separate the AC signals from the DC signals, a receiver circuit was designed for the PV cell. An amplifier and a filter were also included to amplify the signals and to filter the noise outside the detection band. Finally, the signals were captured by a mixed-signal oscilloscope with a sampling rate of 25 MHz and processed offline. After transmission through the 2.4 m underwater channel, the achieved gross data rate of the OFDM signals was 908.2 kbit/s. The constellation map of the received 4-QAM-OFDM signals, shown in Fig. 11, is well converged; the corresponding BER was 1.010×10⁻³.
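A minimal sketch of the 4-QAM OFDM symbol assembly described above, using only the parameters quoted in the text (1024-point IFFT, 93 data subcarriers, a 10-subcarrier gap near DC, cyclic prefix of 10); the Hermitian-symmetry mapping that yields a real-valued drive signal is our assumption, as the text does not state it explicitly.

```python
import numpy as np

# 4-QAM OFDM symbol assembly with the parameters quoted in the text.
# Hermitian symmetry (assumed here) yields a real-valued signal suitable
# for intensity modulation of the white LD.
N_FFT, N_DATA, DC_GAP, N_CP = 1024, 93, 10, 10

rng = np.random.default_rng(2)
bits = rng.integers(0, 2, size=2 * N_DATA)
symbols = ((2 * bits[0::2] - 1) + 1j * (2 * bits[1::2] - 1)) / np.sqrt(2)

spectrum = np.zeros(N_FFT, dtype=complex)
for i, s in enumerate(symbols):
    k = DC_GAP + i                    # leave DC_GAP empty bins near DC
    spectrum[k] = s
    spectrum[N_FFT - k] = np.conj(s)  # Hermitian symmetry -> real signal

time_signal = np.fft.ifft(spectrum).real
ofdm_symbol = np.concatenate([time_signal[-N_CP:], time_signal])  # add CP

print(ofdm_symbol.size)  # 1034 samples per OFDM symbol
```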
VI. FUTURE WORK

Beyond the challenges and solutions mentioned above, areas remain that require extensive investigation for practical UWOC deployment. An example is the physical layer of UWOC, which still requires considerable effort before network construction. Apart from the required compact, high-speed, and low-power optoelectronic devices, a solid understanding of water channels and modem algorithms is urgently needed, supported by both analytical and computational exploratory studies. Higher layers of networking technologies are also in demand, including MAC, localization, route discovery, and multihop communication. Furthermore, low-power and compact computing technology is a major consideration when designing a practical UWOC system for field deployment. This includes digital signal processors (DSPs), field-programmable gate arrays (FPGAs), and future general-purpose computing platforms.

VII. CONCLUSIONS

While UWOC offers high-speed data transfer and complements the existing RF and acoustic technologies, its ultimate performance is affected by the complex underwater environment. The main concerns are (i) alignment loss under oceanic turbulence and (ii) the energy supply for power-hungry underwater devices. Oceanic turbulence, which is induced by temperature or salinity gradients in the water, causes time-varying characteristics of seawater channels and thus results in severe distortion of the received signals, large pointing errors, and even failure of communication. However, such PAT issues and energy harvesting in underwater environments can be addressed by several novel system configurations and device innovations. NLOS UWOC, by taking advantage of underwater light scattering, significantly eases the PAT requirements: the demonstration of a 20-Mbit/s, 2.5-m blue-laser-based NLOS UWOC link proves the feasibility of alignment-free optical communication. Besides innovating the communication system configuration, it is also promising to mitigate such pointing errors by using novel photodetectors with a large active area. The study of a 250-Mbit/s, 1.5-m scintillating-fiber-based photoreceiver link with a 5-cm² active area shows the capability of easing the alignment issues while still maintaining high-speed communication. The PV cell, with a large active area of 36 cm² as a photoreceiver, shows great potential for simultaneous signal detection and energy harvesting for underwater sensors. Beyond the considerations and solutions mentioned, there are other core areas of research interest for field deployment, such as theoretical models and algorithms for randomly varying water channels, higher layers of networking technologies, and low-power computing systems for underwater environments. It can be envisaged that this comprehensive suite of technologies may soon revolutionize underwater communication to meet the demands for comprehensive undersea interconnectivity under the framework of the IoUT.

Fig. 2. Plot of data rate versus range (in terms of extinction length) of recent experimental work on laser-based UWOC.

Fig. 4. (a) Pool testbed for deployment of the 450 nm laser-based non-line-of-sight (NLOS) UWOC modality based on two ROVs. (b) Photograph of the transmitter and receiver pointing in parallel to form an NLOS configuration.
Fig. 3. (a) Beam position on the receiver side with no temperature gradient. (b) Beam position on the receiver side with a 0.1 °C/cm temperature gradient.

Fig. 8. Measured small-signal frequency response of the large-area scintillating-fiber-based photoreceiver over a 1.5-m-long water channel. The inset shows a photograph of the spheroid-like omnidirectional scintillating-fiber-based photoreceiver.

Fig. 9. Received BER and eye diagrams at (a) 150 Mbit/s and (b) 250 Mbit/s over a 1.5-m-long water channel using a 375-nm UV laser as the transmitter.

Fig. 10. Schematic of the UWOC system based on a PV cell.

Yujian Guo (S'18) received a Bachelor's degree in electrical engineering from the University of Electronic Science and Technology of China, Chengdu, Sichuan, China, in 2017. He is currently a Ph.D. student in the Department of Computer, Electrical and Mathematical Sciences & Engineering, KAUST, Kingdom of Saudi Arabia. His current research interests include underwater wireless optical communication and underwater optical channel characterization.

Mustapha Ouhssain received a B.Sc. degree in chemistry from Université Montpellier 2, France (2006) and an M.S. degree in Sciences, Technology and Marine Environments from Toulon University, Toulon, France. He is now a Laboratory Engineer in the Red Sea Research Center, KAUST, Saudi Arabia. His research interests include ocean optics, analytical services, and field and laboratory analysis of marine environments.

Yang Weng (S'15) received a B.S. degree (2015) from the Ocean University of China, Qingdao, China, and an M.S.
degree (2018) from the National Taiwan University, Taiwan, China. From 2018 to 2019, he was a visiting student at KAUST. His research interests include underwater wireless optical communication and the navigation of autonomous underwater vehicles.

Burton H. Jones received his Ph.D. degree from Duke University. He is now a Professor of Integrated Ocean Processes at KAUST. His current interests include biological oceanography, physical and biological interactions, ocean optics, coastal urban issues, and integrated observation and modeling.

Tien Khee Ng (SM'17) received his Ph.D. (2005) and M.Eng. (2001) from Nanyang Technological University (NTU), Singapore. He is a senior research scientist at KAUST, and a co-principal investigator responsible for innovation in MBE-grown nanostructures and devices at the KACST Technology Innovation Center at KAUST. His research focuses on the fundamental and applied research of wide-bandgap group-III nitride, novel hybrid materials and multi-functional devices for efficient light emitters, optical wireless communication and energy harvesting. He is a senior member of OSA, and a member of SPIE and IOP.

Boon S. Ooi is a Professor of Electrical Engineering at KAUST. He received his B.Eng. and Ph.D. in electronics and electrical engineering from the University of Glasgow. His research focuses on the study of semiconductor lasers, LEDs, and photonic integrated circuits for applications in energy-efficient lighting and visible-light communication. He has served on the editorial board of the IEEE Photonics Journal, and on the technical programs of IEDM, OFC, CLEO and IPC. Presently, he is an Associate Editor of Optics Express (OSA) and the Journal of Nanophotonics (SPIE). He is a Fellow of OSA, SPIE and IOP (U.K.), and a Fellow of the U.S. National Academy of Inventors (NAI).

A Review on Practical Considerations and Solutions in Underwater Wireless Optical Communication. Xiaobin Sun, Student Member, IEEE, Chun Hong Kang, Student Member, IEEE, Meiwei Kong, Member, IEEE, Omar Alkhazragi, Student Member, IEEE, Yujian Guo, Student Member, IEEE, Mustapha Ouhssain, Yang Weng, Student Member, IEEE, Burton H. Jones, Tien Khee Ng, Senior Member, IEEE, Boon S. Ooi*

Fig. 1. Data rate versus transmission range of published experimental work on underwater acoustic systems (from [10]).

Xiaobin Sun (S'19) received a B.S. degree in semiconductor physics from the University of Science & Technology, Beijing and an M.S. degree in photonics from King Abdullah University of Science & Technology (KAUST). He is currently working toward a Ph.D. degree at KAUST. His research interests include underwater wireless optical communication and free-space optical and visible-light communication.

Meiwei Kong (M'18) received a B.S. degree in Material Physics from Zhejiang Normal University, China, and a Ph.D. degree in Marine Information Science and Engineering from the School of Ocean College of Zhejiang University. She is a postdoctoral researcher at KAUST. Her research interest is underwater wireless optical communication.

Omar Alkhazragi (S'19) received the degree of Bachelor of Science in Electrical Engineering in 2018 from King Fahd University of Petroleum and Minerals (KFUPM), Saudi Arabia. He is now an M.S./Ph.D. student in electrophysics in the Photonics Laboratory at KAUST, Saudi Arabia. The primary focus of his research is on experimental and theoretical studies of optical wireless communication systems.
7,791
2020-01-15T00:00:00.000
[ "Engineering" ]
A new approach to handle curved meshes in the hybrid high-order method

The hybrid high-order method is a modern numerical framework for the approximation of elliptic PDEs. We present here an extension of the hybrid high-order method to meshes possessing curved edges/faces. Such an extension allows us to enforce boundary conditions exactly on curved domains, and capture curved geometries that appear internally in the domain, e.g., discontinuities in a diffusion coefficient. The method makes use of non-polynomial functions on the curved faces and does not require any mappings between reference elements/faces. Such an approach does not require the faces to be polynomial, and has a strict upper bound on the number of degrees of freedom on a curved face for a given polynomial degree. Moreover, this approach of enriching the space of unknowns on the curved faces with non-polynomial functions should extend naturally to other polytopal methods. We show the method to be stable and consistent on curved meshes and derive optimal error estimates in $L^2$ and energy norms. We present numerical examples of the method on a domain with curved boundary, and for a diffusion problem such that the diffusion tensor is discontinuous along a curved arc.

Introduction

In recent years, there has been a trend in the computational literature towards arbitrary-order polytopal methods for the approximation of partial differential equations. Such methods have a greater flexibility in the mesh requirements and can capture more intricate geometric and physical details in the domain. Being of arbitrary order, they also benefit from better convergence rates with respect to the global degrees of freedom. A short list of such methods includes discontinuous Galerkin and hybridizable discontinuous Galerkin methods [13,18,24], virtual element methods [1,6,9,15], weak Galerkin methods [29], and polytopal finite elements [36]. However, it is well known that any approximation method on a polytopal mesh of a smooth domain (i.e., with a first-order representation of the boundary) will yield at best an order-two convergence rate [35,37]. Thus, any high-order method on curved domains requires a high-order (or exact) representation of the boundary for optimal convergence.

Developed in [23,25], hybrid high-order (HHO) schemes are modern polytopal methods for the approximation of elliptic PDEs. A key aspect of HHO is its applicability to generic meshes with arbitrarily shaped polytopal elements. This article focuses on the extension of HHO methods to allow for curved meshes, with unknowns that capture the geometry exactly, yet still achieve optimal convergence. While the approach is presented within an HHO framework for a diffusion problem, the key ideas are more general and can be extended to related polytopal methods and to other models such as linear elasticity, or the Stokes and Navier-Stokes equations.
There has been much work on the development of discontinuous Galerkin (DG) methods on curved meshes [12,14,28]. We also make note of the article [31] which analyses several approaches to high-order finite element methods on curved meshes. However, for the aforementioned methods the problem is much simpler than for hybrid high-order methods due to the lack of unknowns on the mesh faces. The addition of unknowns on the mesh faces is one of the key benefits hybrid methods have over DG and other non-hybrid methods, due to the strong enforcement of boundary conditions and the reduction of the global degrees of freedom via static condensation [22, Appendix B.3.2].

The article [5] proposes a virtual element method (VEM) in two dimensions for meshes possessing curved edges. For each curved edge the authors consider the space of polynomials on a linear reference segment in R and map this space onto the curved edge via a sufficiently smooth parameterisation. A similar approach is taken in the articles [4,21]. A typical approach for hybridizable discontinuous Galerkin (HDG) methods on curved domains is to map the boundary data onto a polytopal sub-domain [cf. 19,20]. We make note of the articles [32,33] which also use this approach to curved boundaries.

While there has been some work on the development of hybrid high-order methods on curved meshes [8,10,11], the approach we take in this paper is quite different. Indeed, the article [8] approaches the issue of defining unknowns on curved faces by considering a polynomial mapping from a planar reference face onto the curved face. While this naturally requires the mesh faces to be polynomial, it also reduces the approximation order [7]. Indeed, if the mapping onto the face has effective mapping order m (see [8, Equation (5) & Remark 1]), then defining face unknowns of degree l in the reference frame will yield approximation properties of at best order l/m [8, Equation (8)]. To recover the optimal approximation order observed for straight meshes, the degree of the face polynomials in the reference frame is increased by a factor of m, yielding a very large global stencil for high-order mappings. Moreover, approximation properties in the curved faces are unknown, and the authors assume them to be true [cf. 8, Equation (9)] in order to obtain optimal error estimates. We also make note of the conference proceedings [34] which follows the same approach of using reference-frame polynomials to define unknowns on curved faces for an HDG method.

An alternative approach, first considered in [10,11], is to increase the polynomial degree of the element unknowns and weakly enforce the boundary or interface conditions without defining any unknowns on curved faces. This procedure has also been implemented for a fourth-order bi-harmonic problem in a curved domain [26]. Such an approach ensures stability of the system and that optimal convergence rates are achieved. However, this method does not capture the geometry exactly and requires a finely tuned Nitsche parameter to achieve stability and consistency [cf. 30]. Moreover, without unknowns defined on curved faces, it is not clear how to design an enriched method such as that proposed in [38], whereas the method devised in this paper works seamlessly with enrichment.
In this paper we take inspiration from the article [38] and consider unknowns on the faces which include the Neumann traces of higher-order polynomials. We note that this approach does not consider reference elements or faces but rather directly defines non-polynomial spaces on curved faces. Such an approach is therefore more closely analogous to an enriched or extended method than it is to any of the previously mentioned methods of defining unknowns on curved faces. Using this approach, we are not restricted to polynomial faces, but can rather take any C^1 manifold. Moreover, the number of degrees of freedom on curved faces is strictly bounded above and does not grow arbitrarily large for high-order mappings. We are able to prove consistency of the scheme, and, by including the space of constant functions on the faces, the method is shown to be stable. In Section 3 we prove optimal error estimates in energy and L^2-norm, and in Section 5 we present a method for the design of quadrature rules on curved elements. The paper is concluded with some numerical tests in two dimensions in Section 6.

Model and Assumptions on the Mesh

We take a domain Ω ⊂ R^d, d ≥ 2, and consider the Dirichlet-diffusion problem: find u ∈ H^1_0(Ω) such that

a(u, v) = ℓ(v) for all v ∈ H^1_0(Ω), (1.1)

where a(u, v) := (K∇u, ∇v)_Ω and ℓ(v) := (f, v)_Ω for some source term f ∈ L^2(Ω) and diffusion tensor K assumed to be a symmetric, piecewise constant matrix-valued function satisfying, for all ξ ∈ R^d,

K|ξ|^2 ≤ ξ · Kξ ≤ K̄|ξ|^2, (1.2)

for two fixed real numbers 0 < K ≤ K̄. Here and in the following, (·, ·)_X is the L^2-inner product of scalar- or vector-valued functions on a set X for its natural measure. We shall also denote by ‖·‖_X the corresponding norm.

Let H ⊂ (0, ∞) be a countable set of mesh sizes with a unique cluster point at 0. For each h ∈ H, we partition the domain Ω into a mesh M_h = (T_h, F_h), where T_h denotes the mesh elements and F_h the mesh faces.

We suppose that the mesh elements T_h are a disjoint set of bounded simply connected domains in R^d with piecewise C^1 boundary ∂T. We further suppose that the closure of Ω is the union of the closures of the elements T ∈ T_h.

We suppose that the mesh faces F_h are a disjoint set of non-intersecting, finite, (d−1)-dimensional C^1 manifolds which partition the mesh skeleton: the union of ∂T over T ∈ T_h equals the union of the closures of F over F ∈ F_h. For each F ∈ F_h there either exist two distinct elements T_1, T_2 ∈ T_h such that F ⊂ ∂T_1 ∩ ∂T_2, in which case F is called an internal face, or there exists one element T ∈ T_h such that F ⊂ ∂T ∩ ∂Ω, in which case F is called a boundary face. Interior faces are collected in the set F^i_h and boundary faces in the set F^b_h. The parameter h is given by h := max_{T∈T_h} h_T where, for X = T ∈ T_h or X = F ∈ F_h, h_X denotes the diameter of X. We shall also collect the faces attached to an element T ∈ T_h in the set F_T := {F ∈ F_h : F ⊂ ∂T}. The unit normal to F ∈ F_T pointing outside T is denoted by n_TF, and n_T : ∂T → R^d is the unit normal defined by (n_T)|_F = n_TF for all F ∈ F_T. We note that as each F is C^1, the normal n_TF is well defined. It is also worth noting that the normal vector n_TF will not be constant on curved faces.

We consider the following regularity assumption on the mesh elements.

Assumption 1 (Regular mesh sequence). There exists a constant ϱ > 0 such that, for each h ∈ H, every T ∈ T_h is connected by star-shaped sets with parameter ϱ (see [22, Definition 1.41]).
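Since the normal n_TF varies along a curved face, an implementation has to evaluate it pointwise from the face parameterisation rather than store a single vector. The sketch below shows one way to do this in two dimensions for an edge given by a parameterisation γ(t); it is an illustration under our own conventions (the rotation sign must be chosen so the normal points out of the element), not code from the paper.

```python
import numpy as np

def edge_normal(gamma, t, h=1e-7):
    """Unit normal to a curved edge parameterised by gamma: [t0, t1] -> R^2.

    The tangent is approximated by a central finite difference and rotated
    by -90 degrees; for a counter-clockwise traversal of the element
    boundary this rotation points outward.
    """
    tangent = (gamma(t + h) - gamma(t - h)) / (2 * h)
    tangent /= np.linalg.norm(tangent)
    return np.array([tangent[1], -tangent[0]])

# Example: an arc of the unit circle, gamma(t) = (cos t, sin t). The outward
# normal of the disc at parameter t should be (cos t, sin t) itself.
gamma = lambda t: np.array([np.cos(t), np.sin(t)])
print(edge_normal(gamma, 0.3))  # approximately (cos 0.3, sin 0.3)
```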
Remark 1 (Assumptions on the mesh). Assumption 1 is taken from [27, Assumption 1]; however, we have removed the assumption that the faces are connected by star-shaped sets. We also note that there is no requirement that the mesh elements be polytopal or for the mesh faces to be planar.

We further require that the elements of the mesh align with the discontinuities of the diffusion tensor, i.e., for each T ∈ T_h, K|_T := K_T is a constant matrix. In an analogous manner to (1.2) we define quantities 0 < K_T ≤ K̄_T to satisfy

K_T|ξ|^2 ≤ ξ · K_T ξ ≤ K̄_T|ξ|^2 for all ξ ∈ R^d, (1.3)

and we define the local diffusion anisotropy ratio α_T := K̄_T / K_T.

From hereon we shall write f ≲ g if there exists some constant C which is independent of the quantities f and g, the mesh diameter h, and of the diffusion tensor K, such that f ≤ Cg.

Under Assumption 1 the following continuous trace inequality holds: for all v ∈ H^1(T),

‖v‖_∂T ≲ h_T^{-1/2} ‖v‖_T + ‖v‖_T^{1/2} ‖∇v‖_T^{1/2}. (1.4)

A proof of (1.4) is provided in [27]. We note that no assumption on T being polytopal is required. We also note the following inverse Sobolev inequality, a proof of which is provided for highly generic and potentially curved elements in [12, Lemma 4.23]:

‖∇v‖_T ≲ h_T^{-1} ‖v‖_T for all v ∈ P^ℓ(T), (1.5)

where we denote by P^ℓ(T) the space of polynomials on T of degree ≤ ℓ, ℓ ∈ N. Combining (1.4) and (1.5) yields the following discrete trace inequality:

‖v‖_∂T ≲ h_T^{-1/2} ‖v‖_T for all v ∈ P^ℓ(T). (1.6)

Discrete Model

A standard hybrid high-order method on polytopal meshes defines the local discrete space as

U^k_T := {v_T = (v_T, (v_F)_{F∈F_T}) : v_T ∈ P^k(T) and v_F ∈ P^k(F) for all F ∈ F_T}.

This makes sense on polytopal meshes where F is a (d−1)-dimensional hyperplane, as there is no ambiguity in what is meant by P^k(F). Indeed, on such meshes it holds that P^k(F) = P^k(Ω)|_F = P^k(Ω)^d · n_F. On curved meshes, it is not so obvious what the discrete space should be. We find that the appropriate local discrete space is that of

U^k_T := {v_T = (v_T, (v_F)_{F∈F_T}) : v_T ∈ P^k(T) and v_F ∈ 𝒫^k(F) for all F ∈ F_T}, (2.1)

where we define

𝒫^k(F) := P^0(F) + P^k(Ω)^d · n_F, (2.2)

and n_F is an arbitrary unit normal to the face F. The choice of unit normal n_F does not affect the definition of 𝒫^k(F). We note that, even for a curved face, there is no ambiguity in the term P^0(F) as it represents the space of functions which are constant on the face F. We emphasise that as the unit normal n_F is not constant, the space 𝒫^k(F) will be non-polynomial on curved faces.

Remark 2. If F is planar (that is, a (d−1)-dimensional hyperplane) then it holds that 𝒫^k(F) = P^k(F) and thus the discrete space in (2.1) coincides with the usual HHO space.

Remark 3. It suffices to take the space of unknowns on the faces as P^0(F) + K∇P^{k+1}(T) · n_F for stability and consistency to hold. However, we define the space as 𝒫^k(F) = P^0(F) + P^k(Ω)^d · n_F for simpler implementation and robustness of more general models.

We shall denote by 𝒫^k(F_T) the space of functions on ∂T whose restriction to each F ∈ F_T belongs to 𝒫^k(F), and for a given v ∈ L^1(∂T) we denote by π^{0,k}_{F_T} v its L^2-orthogonal projection onto 𝒫^k(F_T). We denote by π^{0,k}_T : L^1(T) → P^k(T) and π^{0,k}_F : L^1(F) → 𝒫^k(F) the L^2-orthogonal projectors onto the spaces P^k(T) and 𝒫^k(F) respectively. The potential reconstruction p^{k+1}_{K,T} : U^k_T → P^{k+1}(T) is defined such that, for all w ∈ P^{k+1}(T),

(K_T ∇p^{k+1}_{K,T} v_T, ∇w)_T = −(v_T, ∇·(K_T ∇w))_T + Σ_{F∈F_T} (v_F, K_T ∇w · n_TF)_F, with ∫_T p^{k+1}_{K,T} v_T = ∫_T v_T. (2.3)

We denote by π^{1,k+1}_{K,T} : L^1(T) → P^{k+1}(T) the oblique elliptic projector onto the space P^{k+1}(T) satisfying, for all v ∈ H^1(T),

(K_T ∇(π^{1,k+1}_{K,T} v − v), ∇w)_T = 0 for all w ∈ P^{k+1}(T), (2.4a)
∫_T (π^{1,k+1}_{K,T} v − v) = 0. (2.4b)

The following weighted inner-products and norms are taken from [27]. The weighted inner-product is

(∇w, ∇v)_{K,T} := (K_T ∇w, ∇v)_T, with seminorm |v|_{K,H^1(T)} := ‖K_T^{1/2} ∇v‖_T, (2.5)

and, for r ≥ 1,

|v|_{K,H^r(T)} := K̄_T^{1/2} |∇v|_{H^{r−1}(T)^d}. (2.6)

Lemma 1 (Approximation properties of π^{1,k+1}_{K,T}). For all s = 1, ..., k+1 and v ∈ H^{k+2}(T),

|v − π^{1,k+1}_{K,T} v|_{K,H^s(T)} ≲ K̄_T^{1/2} h_T^{k+2−s} |v|_{H^{k+2}(T)}. (2.7)

Proof. A proof is provided by [27, Lemma 9]. While that particular proof assumes the elements are polytopal, the proof only relies on [22, Theorem 1.50] which is provided for generic elements connected by star-shaped sets.

The interpolant I^k_T : H^1(T) → U^k_T is defined by

I^k_T v := (π^{0,k}_T v, (π^{0,k}_F (v|_F))_{F∈F_T}), (2.8)

where π^{0,k}_F acts on the trace of v on each face.

Lemma 2.
The following commutation property holds: for all v ∈ H^1(T),

p^{k+1}_{K,T}(I^k_T v) = π^{1,k+1}_{K,T} v. (2.9)

Proof. It follows from the definitions (2.3) of p^{k+1}_{K,T} and (2.8) of I^k_T, where in the last two equalities we have integrated by parts and introduced the oblique elliptic projector using equation (2.4a).

Remark 4. We note that the commutation property (2.9) is the key result required to prove consistency of the scheme and relies on the fact that K∇P^{k+1}(T) · n_TF ⊂ 𝒫^k(F) for all F ∈ F_T. The additional condition that P^0(F) ⊂ 𝒫^k(F) is required for coercivity to hold.

We endow the discrete space U^k_T with the seminorm

|v_T|^2_{1,K,T} := ‖K_T^{1/2} ∇v_T‖^2_T + Σ_{F∈F_T} (K̄_T / h_T) ‖v_F − v_T‖^2_F. (2.10)

The local bilinear form a_{K,T} : U^k_T × U^k_T → R is given by

a_{K,T}(u_T, v_T) := (K_T ∇p^{k+1}_{K,T} u_T, ∇p^{k+1}_{K,T} v_T)_T + s_{K,T}(u_T, v_T), (2.11)

where s_{K,T} : U^k_T × U^k_T → R is a local stabilisation term such that the following assumptions hold.

Assumption 2 (Local stabilisation term). The stabilisation term s_{K,T} is a symmetric, positive semi-definite bilinear form that satisfies:

1. Stability and boundedness. For all v_T ∈ U^k_T,

α_T^{-1} |v_T|^2_{1,K,T} ≲ a_{K,T}(v_T, v_T) ≲ α_T |v_T|^2_{1,K,T}. (2.12)

2. Polynomial consistency. For all w ∈ P^{k+1}(T) and all v_T ∈ U^k_T,

s_{K,T}(I^k_T w, v_T) = 0. (2.13)

An example of a stabilisation bilinear form satisfying Assumption 2 is provided in Section 4.

Lemma 3 (Consistency of s_{K,T}). Suppose s_{K,T} satisfies Assumption 2. Then, for all v ∈ H^{k+2}(T),

s_{K,T}(I^k_T v, I^k_T v)^{1/2} ≲ α_T^{1/2} K̄_T^{1/2} h_T^{k+1} |v|_{H^{k+2}(T)}. (2.14)

Proof. Applying the upper bound in (2.12) and the definition (2.10) of the seminorm, we infer from Lemma 7 below a bound in terms of v − π^{1,k+1}_{K,T} v. The proof then follows from the approximation properties (2.7) of π^{1,k+1}_{K,T}.

Global Space and HHO Scheme

The global space of unknowns is defined as

U^k_h := {v_h = ((v_T)_{T∈T_h}, (v_F)_{F∈F_h}) : v_T ∈ P^k(T) for all T ∈ T_h and v_F ∈ 𝒫^k(F) for all F ∈ F_h}.

To account for the homogeneous boundary conditions, the following subspace is also introduced:

U^k_{h,0} := {v_h ∈ U^k_h : v_F = 0 for all F ∈ F^b_h}.

For any v_h ∈ U^k_h we denote its restriction to an element T by v_T ∈ U^k_T. The HHO scheme reads: find u_h ∈ U^k_{h,0} such that

a_h(u_h, v_h) := Σ_{T∈T_h} a_{K,T}(u_T, v_T) = ℓ_h(v_h) for all v_h ∈ U^k_{h,0}, (2.18)

where ℓ_h : U^k_{h,0} → R is a linear form defined as ℓ_h(v_h) := Σ_{T∈T_h} (f, v_T)_T. We define the discrete energy norm ‖v_h‖_{a,K,h} := a_h(v_h, v_h)^{1/2}. Thus, if ‖v_h‖_{a,K,h} = 0 then it must hold that v_T = v_F = const for every T ∈ T_h, F ∈ F_h. However, we infer from the homogeneous boundary conditions that those constants must all be zero.

Error estimates

Theorem 5 (Consistency error). The consistency error E_h(w; ·) is defined for sufficiently regular w; if such a w additionally satisfies w|_T ∈ H^{k+2}(T) for all T ∈ T_h, the consistency error satisfies the optimal bound (3.1).

The global operators p^{k+1}_{K,h} and π^{1,k+1}_{K,h} are defined such that their actions restricted to an element T ∈ T_h are those of p^{k+1}_{K,T} and π^{1,k+1}_{K,T}. The global interpolator I^k_h is defined analogously.

Theorem 6 (Energy and L^2 error estimates). Let u ∈ H^1_0(Ω) be the exact solution to equation (1.1) and suppose the additional regularity u ∈ H^{k+2}(T_h). Let u_h be the exact solution to the discrete problem (2.18). Then the following error estimates hold:

• Energy estimate. The error in the discrete energy norm satisfies the optimal bound (3.2).

• L^2 estimate. Suppose additionally that the domain Ω is convex and K = I is the identity matrix; then optimal convergence in the L^2-norm holds, as stated in (3.3).

Proof of Theorems 5 and 6. The estimates (3.1) and (3.2) are provided in [27] and rely only on the design conditions stated in Assumption 2, the commutation property (2.9), the approximation properties of the elliptic projector (2.7), the consistency of s_{K,T} (2.14), Lemma 4, as well as standard trace and inverse estimates provided in Section 1.1.
To prove (3.3) we require a slightly different approach to that of [22, Theorem 2.32]. In particular, as π^{0,k}_F is not a polynomial projector, [22, Equation (2.78)] does not hold in our case. However, the remainder of the proof is the same, so we only have to bound the supremum over all g ∈ L^2(Ω) with ‖g‖_Ω ≤ 1, where z_g is the solution to the dual problem a(v, z_g) = (g, v)_Ω for all v ∈ H^1_0(Ω). As we have assumed Ω to be convex, the following elliptic regularity holds:

‖z_g‖_{H^2(Ω)} ≲ ‖g‖_Ω. (3.5)

Moreover, as K = I, the equality (3.6) established in the proof of [22, Lemma 2.18] holds true. The sum over the boundary term in (3.6) can be rewritten as follows. As ∇π^{1,k+1}_{K,T} u · n_TF ∈ 𝒫^k(F), we may drop the projector π^{0,k}_F. As ∇u ∈ H(div; Ω), the fluxes of u are continuous across every internal face F ∈ F^i_h. Therefore, as π^{0,k}_F z_g = 0 for all F ∈ F^b_h (due to z_g = 0 on ∂Ω), the boundary contributions vanish. Substituting back into (3.6) yields the required identity. It follows from a Cauchy-Schwarz inequality and the consistency (2.14), and likewise from a Cauchy-Schwarz inequality, the continuous trace inequality (1.4) and the approximation properties (2.7), that the remaining terms are of optimal order; the proof follows from the elliptic regularity (3.5) and the bound ‖g‖_Ω ≤ 1. Finally, by a continuous trace inequality and a Poincaré-Wirtinger inequality, the result holds due to the H^1-approximation properties of the L^2-projector [22, Lemma 1.43], which remain valid in curved domains.

Analysis of the stabilisation

We consider here the stabilisation bilinear form defined by (4.1); however, the arguments we use to show robustness on curved meshes extend seamlessly to more general choices of stability such as those considered in [27, Section 4]. It is clear that s_{K,T} satisfies (2.13), so it remains to prove that (2.12) holds.

Proof. We first note the bound on ∂T which follows from the ellipticity (1.3) of K_T, and consider a splitting by a triangle inequality. First, we wish to bound the term ‖v − π^{0,k}_{F_T} v‖_∂T. As π^{0,k}_{F_T} is the L^2-orthogonal projector on 𝒫^k(F_T), it minimises its respective norm. Therefore, we may replace π^{0,k}_{F_T} v with any element of 𝒫^k(F_T). In particular, as P^0(F) ⊂ 𝒫^k(F), it follows from the continuous trace inequality (1.4) and a Poincaré-Wirtinger inequality that this term is controlled. Similarly, we apply the continuous trace inequality and a Poincaré-Wirtinger inequality to the term scaled by h_T, where we have applied a triangle inequality to reach the conclusion. It follows from [22, Equation (1.77)] (which invokes [22, Equation (1.74)], which does not rely on the elements being polytopal) that the volumetric contribution is bounded on T. Thus, we can conclude the required estimate, and the proof follows by applying the ellipticity (1.3) of K_T.

Remark 6. We note that the inclusion P^0(F) ⊂ 𝒫^k(F) is crucial for the bound to hold, and without this inclusion, coercivity cannot hold.

Lemma 8 (Coercivity). It holds for all v_T ∈ U^k_T that the lower bound in (2.12) is satisfied.

Proof. It follows from the definition (2.10) of the seminorm, where we have added and subtracted π^{0,k}_T p^{k+1}_{K,T} v_T to the volumetric term, and π^{0,k}_T p^{k+1}_{K,T} v_T and π^{0,k}_{F_T} p^{k+1}_{K,T} v_T to the boundary term, and invoked triangle inequalities to reach the conclusion. Similar to the proof of Lemma 7, we apply the continuous trace inequality (1.4) and a Poincaré-Wirtinger inequality (due to the zero mean value of the projection error of p^{k+1}_{K,T} v_T). We can conclude from Lemma 7 the required bound, which combined with the definition of a_{K,T} yields the result.

Lemma 9 (Boundedness).
It holds for all v_T ∈ U^k_T that the upper bound in (2.12) is satisfied.

Proof. Consider, by a triangle inequality and Lemma 7, a splitting of a_{K,T}(v_T, v_T). Thus, we need to prove a bound on |p^{k+1}_{K,T} v_T|_{K,H^1(T)}. It follows from the definition (2.3) of p^{k+1}_{K,T} and an integration by parts that the estimate (4.5) holds, where we have applied Cauchy-Schwarz inequalities on both inner-products and the discrete trace inequality (1.6). The proof follows by simplifying (4.5) by |p^{k+1}_{K,T} v_T|_{K,H^1(T)} and squaring.

Integration on curved domains

The design of integration methods on curved domains is an active area of research. In the recent article [2] a quadrature rule for curved domains is developed by considering a decomposition into triangular or rectangular pyramids T and a mapping from [0, 1]^d onto T for each element of the decomposition. With knowledge of the Jacobian of such a mapping, integration can be performed on the preimage of each T. The article [17] develops an extension of the homogeneous integration rule developed in [16] by considering a curved triangulation of the domain and constructing a scaled boundary parameterisation on each curved triangle. Here, we also consider an extension of the homogeneous integration rule, but the approach we take is quite different. We avoid the need to split the curved domain into sub-regions and directly map the integral onto the boundary by constructing a Poincaré-type operator which inverts the divergence operator. Indeed, this operator was briefly mentioned in the appendix of [17]; however, we develop the ideas here without a sub-triangulation, and independent of dimension.

We begin with the formula developed in [16] to rewrite the integral onto the boundary of the element. This rule works by identifying a vector field such that ∇ · F = v for homogeneous functions v of degree q, namely

F(x) := x v(x) / (q + d). (5.1)

Therefore,

∫_T v dx = 1/(q + d) ∫_∂T v (x · n_T) ds. (5.2)

We would like to extend this rule to non-homogeneous functions. We begin by searching for a vector field of the form F = g r̂, such that ∇ · F = v, where r̂ denotes the unit vector in the radial direction. We find that the unknown function g must satisfy

(1/r^{d−1}) ∂_r (r^{d−1} g) = v,

where we denote by r = |x|. A solution is given by

g(x) = r^{1−d} ∫_0^r s^{d−1} v(s x/r) ds = r ∫_0^1 t^{d−1} v(tx) dt. (5.3)

Thus, we have found an inverse divergence F(x) = x ∫_0^1 t^{d−1} v(tx) dt. Therefore, an integral over the element T can be rewritten onto its boundary as follows:

∫_T v dx = ∫_∂T (x · n_T) ∫_0^1 t^{d−1} v(tx) dt ds. (5.4)

We note that if v is a homogeneous function of degree q (that is, v(tx) = t^q v(x)), then the inverse divergence formulae (5.1) and (5.3) coincide and thus so do the rules (5.2) and (5.4). In this sense, the method can be considered an extension of the homogeneous integration rule developed in [16].

If we instead consider a vector field of the form F = g r̂_0, where r̂_0 is the unit radial direction from a shifted origin x_0, we arrive at the more general formula

F(x) = (x − x_0) ∫_0^1 t^{d−1} v(tx + (1 − t)x_0) dt. (5.5)

Therefore, we may write

∫_T v dx = ∫_∂T ((x − x_0) · n_T) ∫_0^1 t^{d−1} v(tx + (1 − t)x_0) dt ds. (5.6)

This is very useful if the element contains one or more planar faces. For a vertex with coordinates ν, we can set x_0 = ν and it holds that (x − ν) · n_T = 0 on any planar faces connected to the vertex ν. We note that if T is not star-shaped with respect to ν, then the integral ∫_0^1 t^{d−1} v(tx + (1 − t)x_0) dt will pass through points outside of T. Thus, one would require a sufficiently smooth extension of v outside of T. However, for polynomials or functions analytic over Ω (such as an analytic source term), such an extension is trivial.

A quadrature rule for curved edges in two dimensions

For a given edge E, consider a parameterisation γ_E : [t_0, t_1] → E, t_0 < t_1. Therefore, integration on curved edges is trivial:

∫_E v ds = ∫_{t_0}^{t_1} v(γ_E(t)) |γ'_E(t)| dt.

The above integral can easily be approximated with a one-dimensional Gaussian quadrature rule.
In particular, let w_i, x_i, i = 1, ..., N, be the weights and abscissae associated with a quadrature rule on [0, 1]. Then we can generate weights w^E_i and abscissae x^E_i on the edge E as follows:

x^E_i = γ_E(t_0 + x_i(t_1 − t_0)), w^E_i = w_i (t_1 − t_0) |γ'_E(t_0 + x_i(t_1 − t_0))|. (5.7)

In practice, we generally store an arc-length parameterisation for each edge and thus the term |γ'_E| is not required.

A quadrature rule for elements in two dimensions

In two dimensions the faces are edges and thus the boundary integral in (5.6) can be evaluated on each edge F ∈ F_T using the rule described in (5.7). We let w^F_i and x^F_i, i = 1, ..., N, be the quadrature weights and abscissae associated with an edge F ∈ F_T and w_j, x_j, j = 1, ..., M, be the weights and abscissae associated with a quadrature rule on [0, 1]. We set ν to be the coordinate of a vertex of T connected to the highest number of straight edges in T. We then consider the quadrature rule

∫_T v dx ≈ Σ_{F∈F_T} Σ_{i=1}^N Σ_{j=1}^M w^F_i w_j x_j ((x^F_i − ν) · n_TF(x^F_i)) v(x_j x^F_i + (1 − x_j)ν). (5.8)

That is, we store weights w^F_i w_j x_j ((x^F_i − ν) · n_TF(x^F_i)) and abscissae x_j x^F_i + (1 − x_j)ν, for each i = 1, ..., N, j = 1, ..., M, and on each edge F ∈ F_T that is not a straight edge connected to the vertex ν.

If T is polygonal, then there always exist two straight edges connected to a vertex ν. Thus, the rule described by (5.8) consists of (|F_T| − 2)NM quadrature points. If we consider a Gauss-Legendre rule on each edge which is exact for polynomials of degree k, then we require N = ⌈(k+1)/2⌉. However, for the inverse divergence formula (5.5) to reproduce polynomials of degree k exactly, we require M = ⌈(k+2)/2⌉ due to the presence of the multiplier t. This yields a total of (|F_T| − 2)⌈(k+1)/2⌉⌈(k+2)/2⌉ quadrature points. We note this is a slightly larger number of quadrature points than the usual (|F_T| − 2)⌈(k+1)/2⌉² required by splitting the polygon T into (|F_T| − 2) sub-triangles (an optimal sub-triangulation) and considering a Gauss-Legendre rule on each sub-triangle. However, (5.8) avoids the complex process of generating such a sub-triangulation. To avoid these additional quadrature points, one would need to consider a Gauss-Legendre rule on each edge, but a weighted Gaussian rule with the weight function w(t) = t for the integral (5.5). This is not explored further here.

A quadrature rule for elements in three dimensions

In three dimensions, a volumetric integral can be mapped onto the faces as follows:

∫_T v dx = Σ_{F∈F_T} ∫_F ((x − x_0) · n_TF) ∫_0^1 t² v(tx + (1 − t)x_0) dt ds. (5.9)

Thus, given a quadrature rule for each face F ∈ F_T, a quadrature rule for the element T can be developed analogously to the two-dimensional case. On each face we must therefore integrate the function x ↦ ((x − x_0) · n_TF) ∫_0^1 t² v(tx + (1 − t)x_0) dt. Take the planar region F̂ ⊂ R² with (potentially curved) edges Ê_F̂ and a parameterisation γ_F : F̂ → F. It holds that

∫_F w ds = ∫_F̂ w(γ_F(x̂)) J(x̂)^{1/2} dx̂,

where J(x̂) = det(J_γ(x̂)^t J_γ(x̂)) and J_γ is the Jacobian matrix of the map γ_F. It then follows from (5.6), applied on the planar region F̂, that the integral over F̂ can in turn be mapped onto the edges Ê ∈ Ê_F̂ (5.10), where n_F̂Ê denotes the unit normal directed out of F̂ and towards Ê. Therefore, given the parameterisation γ_F and a parameterisation of each mapped edge Ê ∈ Ê_F̂, the integral (5.10) can be evaluated analogously to the 2D case (5.8).
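To make the element rule (5.8) concrete, the following sketch implements it in two dimensions for a polygon with straight edges (where the edge rules are ordinary Gauss-Legendre rules and the normals are constant) and checks it on the unit square. The function names and the test polynomial are our own; handling genuinely curved edges only requires replacing the edge quadrature with (5.7) and evaluating the normal pointwise.

```python
import numpy as np

def gauss01(n):
    """Gauss-Legendre nodes and weights mapped to [0, 1]."""
    x, w = np.polynomial.legendre.leggauss(n)
    return (x + 1) / 2, w / 2

def element_quadrature(vertices, nu, N, M):
    """Quadrature for a straight-edged polygon via the boundary rule (5.8).

    vertices : polygon corners in counter-clockwise order
    nu       : the shifted origin x0 (ideally a vertex of the polygon)
    N, M     : numbers of edge and radial Gauss points
    """
    xe, we = gauss01(N)
    xr, wr = gauss01(M)
    pts, wts = [], []
    V = np.asarray(vertices, dtype=float)
    for a, b in zip(V, np.roll(V, -1, axis=0)):
        edge = b - a
        # Outward normal for CCW ordering; |normal| = edge length, which
        # absorbs the arc-length factor of the edge rule.
        normal = np.array([edge[1], -edge[0]])
        for xi, wi in zip(xe, we):
            x = a + xi * edge
            moment = (x - nu) @ normal      # (x - x0) . n, length included
            for tj, wj in zip(xr, wr):
                pts.append(tj * x + (1 - tj) * nu)
                wts.append(wi * wj * tj * moment)   # t^{d-1} = t in 2D
    return np.array(pts), np.array(wts)

# Sanity check on the unit square: the integral of x^2 * y over [0,1]^2 is 1/6.
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
pts, wts = element_quadrature(square, nu=np.array([0.0, 0.0]), N=3, M=3)
approx = np.sum(wts * pts[:, 0]**2 * pts[:, 1])
print(approx, abs(approx - 1 / 6))  # exact up to rounding error
```

Note that edges through the vertex nu contribute zero weight automatically, since (x − ν) · n vanishes there, matching the observation below (5.8) that such edges can be skipped.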
A note on planar faces

If the face F is planar, one can follow a procedure similar to that in [3] to rewrite the integrals on each face onto the edges E ∈ E_F. We take γ_F(x̂) = x_F + E x̂, where x_F is a point in the face F and E is an orthonormal matrix. Then it holds that J(x̂) ≡ 1, and thus we can map the integral (5.10) back to the edges of the face F. Moreover, as E is orthonormal it preserves distances, so the mapped quantities can be written directly in terms of points on F, where n_FE denotes the unit normal directed out of F and towards E. The mapping γ_F is onto, so we can choose x̂_0 such that γ_F(x̂_0) = x_{F,0} for an arbitrary point x_{F,0} ∈ F. Again, we may choose x_{F,0} to be the vertex of the face F connected to the largest number of straight edges. The resulting integral (5.11) is then evaluated in an identical manner as for two-dimensional elements.

Implementation

The HHO method for curved edges is implemented using the open source C++ library PolyMesh [39]. We generate curved meshes by first considering uniform Cartesian meshes and 'cutting' along a curve. The integrals are computed using the quadrature rule described by (5.8), where we take the one-dimensional integration rules to be Gauss-Legendre rules of degree 30.

A basis is formed for the space 𝒫^k(F) by first generating a spanning set by considering a canonical basis of P^k(Ω)^d and taking P^0(F) + P^k(Ω)^d · n_F. The linearly dependent basis functions are removed algebraically using the FullPivLU class found in the Eigen library, with documentation available at https://eigen.tuxfamily.org/dox/classEigen_1_1FullPivLU.html. This requires a threshold to be set which determines the point at which pivots are considered to be numerically zero. We set this value to 10^{-15}. We note that for sufficiently small h and large k this can result in certain linearly independent functions being removed from 𝒫^k(F). However, as these functions are 'close' to being linearly dependent, the method seems unaffected by their removal. The bases of both 𝒫^k(F) and P^k(T) are orthonormalised via a Gram-Schmidt process.

The relative error of the scheme is measured through three relative error quantities, where the norm ‖·‖_{L²(T_h)} is defined as the square root of the sum of squares of ‖·‖_{L²(T)}. We note that if the mesh conforms to the domain Ω then ‖v‖_{L²(T_h)} = ‖v‖_Ω for all v ∈ L²(Ω).

We consider here two sequences of meshes of the domain Ω. The curved meshes use an exact representation of the boundary, whereas the straight meshes take a piece-wise linear approximation of the boundary. The parameters of the mesh sequences are displayed in Table 1. Both sequences of meshes have the same parameters. Example curved meshes are plotted in Figure 1 and straight meshes are plotted in Figure 2. In Figure 3 we test both a curved HHO scheme and a classical HHO scheme (on straight meshes) with polynomial degrees given by k = 1 and k = 3. In both cases the curved HHO scheme on the fitted mesh observes significantly better convergence rates than the classical scheme on the straight mesh. While the scheme appears to converge optimally on curved meshes, it converges at most at order 2 on straight meshes.

We consider Ω = {(x, y) : x² + y² < 1} to be the unit disc and K a piece-wise constant diffusion tensor given by a constant anisotropic tensor with eigenvalues β_1 and β_2 in the region r < R, and the identity in r > R. We take R = 0.8, β_1 = 10⁻⁶ and β_2 = 1, which corresponds to anisotropic diffusion in the region r < R, and a Poisson problem in r > R.
We take the source term to be f ≡ 1. Again, we consider two sequences of meshes of the domain Ω. We take both sequences to fit the domain Ω exactly; however, the curved mesh we take to fit the discontinuity in K exactly, while the straight mesh takes a piece-wise linear approximation of the discontinuity. The mesh data is presented in Table 2. We note that both sequences of meshes have the same parameters. An example curved mesh and an example straight mesh are plotted in Figure 5. As we do not know the exact solution to this problem, we run the scheme on the finest curved mesh with k = 7. We denote the discrete solution to this problem by p^{k+1}_{K,h} u_h = u*_h, which will play the role of the 'exact' solution. We measure the quantities

∫_Ω u*_h ≈ 0.46006947 ; |u*_h|_{H¹(T_h)} ≈ 0.80699766.

We would then like to test the performance of the scheme on coarser meshes (both curved and straight) with smaller k by investigating the behaviour of the errors E_1 and E_2 relative to these reference values. We are less interested in the rate of convergence of these measures, but rather want to observe steady convergence, and investigate the difference between the two schemes. In Figure 6 we plot the quantities E_1 and E_2 against increasing polynomial degree k where we fix the mesh to be Mesh 1. It is clear that the scheme on the straight mesh, where we consider an approximate diffusion tensor, stops converging for k > 1, whereas the scheme on the curved mesh converges smoothly. In Figure 7 we test against decreasing mesh size h for polynomial degrees k. While for k = 1 the orders of E_1 and E_2 are similar for both schemes (and at times smaller on the straight mesh), the convergence is much smoother on the curved mesh. For k = 3, the values are significantly smaller for the curved mesh, and the behaviour for the straight mesh does not differ much from the k = 1 case. This coincides with the previous observations that increasing k past 1 has little effect on the scheme when considering a piece-wise linear approximation of the discontinuity in the diffusion.

Finally, in Figure 8 we show contour plots of the potential reconstructions of the discrete solutions on Mesh 1 with k = 7. We observe that the plot on the straight mesh seems to be distorted along the eigenvectors of K (that is, (1, 1)^t and (1, −1)^t) when compared to the plot on the curved mesh. We also plot the absolute value of the difference between the two schemes and observe that this value seems to be of greatest magnitude around the discontinuity in the diffusion tensor.

Remark 5. The seminorm |·|_{H^s(T_h)} is defined as the square root of the sum of squares of |·|_{H^s(T)} for any s ∈ N. The L²-error estimate is stated with identity diffusion, corresponding to a Poisson problem. However, the result follows trivially (with a hidden constant depending additionally on the anisotropy of K) for any constant diffusion tensor K [cf. 22, Remark 3.21].

Figure 1: Example curved meshes used for the curved boundary test.
Figure 2: Example straight meshes used for the curved boundary test.
Figure 5: Example meshes used for the heterogeneous diffusion test.
Table 1: Parameters of the mesh sequences used for the curved boundary test.
Table 2: Parameters of the mesh sequences used for the heterogeneous diffusion test.
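As a companion to the implementation notes above, the following sketch mimics the construction of a basis for 𝒫^k(F) = P^0(F) + P^k(Ω)² · n_F on a curved edge: a spanning set is sampled at quadrature points, near-dependent functions are discarded with a rank-revealing decomposition (we use numpy's SVD with a drop tolerance in place of Eigen's FullPivLU), and the survivors are orthonormalised in one step. Names and tolerances are illustrative, not the authors' code.

```python
import numpy as np

def face_space_basis(gamma, normal, k, n_quad=30, tol=1e-12):
    """Sampled basis of P^0(F) + P^k(Omega)^2 . n_F on a curved edge.

    gamma  : parameterisation t in [0, 1] -> R^2 of the edge
    normal : unit normal along the edge, t -> R^2 (non-constant in general)
    Returns a matrix whose columns are orthonormal with respect to the
    sampled L^2 inner product on the edge.
    """
    t, w = np.polynomial.legendre.leggauss(n_quad)
    t, w = (t + 1) / 2, w / 2
    x = np.array([gamma(s) for s in t])
    n = np.array([normal(s) for s in t])

    # Spanning set: the constant, then (x^a y^b, 0).n and (0, x^a y^b).n.
    cols = [np.ones(n_quad)]
    for a in range(k + 1):
        for b in range(k + 1 - a):
            mono = x[:, 0]**a * x[:, 1]**b
            cols.append(mono * n[:, 0])
            cols.append(mono * n[:, 1])
    A = np.column_stack(cols) * np.sqrt(w)[:, None]  # weight the samples

    # Rank-reveal and orthonormalise simultaneously via the SVD.
    U, s, _ = np.linalg.svd(A, full_matrices=False)
    r = int(np.sum(s > tol * s[0]))
    return U[:, :r] / np.sqrt(w)[:, None]

# Example: a quarter-circle edge with normal pointing away from the origin.
gamma = lambda s: np.array([np.cos(s * np.pi / 2), np.sin(s * np.pi / 2)])
basis = face_space_basis(gamma, normal=gamma, k=2)
print(basis.shape)  # (30, dim of the face space), strictly bounded in k
```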
8,253
2022-12-11T00:00:00.000
[ "Computer Science", "Mathematics" ]
Uncertainty and the welfare economics of medical care: an Austrian rebuttal: part 1

Corresponding author: Gilbert Berdine MD. Contact Information: <EMAIL_ADDRESS> DOI: 10.12746/swrccc2016.0416.221

Opponents of free market solutions to the scarcity of health care claim that health care is special and cannot be treated like a commodity. Kenneth Arrow is frequently cited as proof that special features of health care violate assumptions that free market economics are based on. An example can be found in a recent debate on a single payer solution to health care.1-4 In their rebuttal, the opponents of free market solutions claimed: In his first section, Kenneth Arrow defines what he means by a free market, asserts that health care has special features incompatible with his definition of a free market, and concludes that various uncertainties require non-market interventions, such as government subsidy, to achieve optimal or efficient results. Before considering Arrow's specific analysis, I would like to consider whether this general method is valid. Consider the following equation: PV = nRT. Every physician should recognize that this equation is the Ideal Gas Equation. There are no ideal gases in nature, so why do we bother learning about ideal gases that do not exist? The utility of the Ideal Gas Equation is that it leads to predictions about how gases behave. Whether or not computations based on the equation give answers that differ from actual measurements at the 6th decimal point is not what determines the utility of the equation. The utility of the equation is that general principles are easy to grasp and the equation allows predictions that are good approximations of real gas behavior. The advanced student can learn the corrections for non-ideal behavior due to factors such as the non-zero volume of molecules. Demonstration that a real gas, such as oxygen, violates the assumptions of an ideal gas does not void the utility of the equation or its applicability to the behavior of oxygen. Likewise, a demonstration that a market for health care contains non-ideal features does not invalidate analysis based on ideal free markets. One would have to demonstrate that the deviations from ideal behavior are so great as to make qualitative predictions impossible.

The Austrian definition of a free market is that all exchanges are voluntary; no coercion is involved. While this definition may be violated by existing health care in the United States, such as the mandatory provisions of the Affordable Care Act (ACA), there is no intrinsic feature of health care that makes Austrian free markets impossible in the United States. Only if one stipulates that health care is a right rather than a scarce resource can health care be considered to be incompatible with the Austrian definition of a free market. I will deal with that issue near the end of this discussion.

While the Austrian definition of a free market is very simple, Kenneth Arrow's definition is complicated. Arrow defines a free market based on a competitive model.
"The focus of discussion will be on the way the operation of the medical-care industry and the efficacy with which it satisfies the needs of society differ from a norm, if at all.The "norm" that the economist usually uses for the purposes of such comparisons is the operation of a competitive model …" 5 Kenneth Arrow then defines a competitive model."that is, the flows of services that would be offered and purchased and the prices that would be paid for them if each individual in the market offered or purchased services at the going prices as if his decisions had no influence over them, and the going prices were such that the amounts of services which were available equaled the total amounts which other individuals were willing to purchase, with no imposed restrictions on supply or demand." 5is definition appears to be a fantasy even for fungible commodities as the Law of Marginal Utility dictates that each transaction will affect the price.Be that as it may, Kenneth Arrow moves on to a discussion of Pareto Optimality. "If a competitive equilibrium exists at all, and if all commodities relevant to costs or utilities are in fact priced in the market, then the equilibrium is necessarily optimal in the following precise sense (due to V. Pareto): There is no other allocation of resources to services which will make all participants in the market better off." 5 There are subtle differences between the Austrian and mainstream viewpoints.The mainstream view is that each transaction moves towards a Pareto Optimal point, while Austrians believe that each transaction clears the market at a Pareto Optimal point.The distinction is important, but it does not affect the discussion of this topic.Both viewpoints believe that the market achieves a condition in which all exchanges make all participants better off; there are neither unsatisfied buyers nor sellers.The following illustration should clarify this concept.All schools of economics accept this figure.The supply curve has positive slope.As price increases more goods are offered for sale.The demand curve has negative slope.As price increases fewer bids for purchase will be extended.The two curves must intersect and the point of intersection is called the market clearing point or market clearing price.At the market clearing price there are neither unsatisfied buyers nor sellers.Note, however, that not everyone makes a purchase or sells something.Some buyers refuse to purchase because they value the price more than the good.In other words, they have a higher priority for the money.Some sellers decline to sell; they value the good more than the price.In other words, the good has a higher priority to them than anything else that the money could buy.Finally, at any price above or below the market clearing price, there would be unsatisfied buyers or sellers.At a price above the market clearing price, there are sellers willing to supply, but they cannot find a buyer at that price.At a price Gilbert Berdine Medical care: an Austrian rebuttal below the market clearing price, there are buyers willing to buy, but they cannot find a seller at that price. The following text is probably the crux of the argument as it provides the justification for deviating from Pareto Optimality. 
"It is reasonable enough to assert that a change in allocation which makes all participants better off is one that certainly should be made;" 5 This seems to be a tautology and irrefutable.However, Kenneth Arrow will disagree before the sentence is completed."this is a value judgment, not a descriptive proposition, but it is a very weak one." 5 The above assertion that the value judgement of optimality is a weak one seems counterintuitive and is offered without any justification.Apparently Kenneth Arrow considered it to be self-evident. "We cannot indeed make a change that does not hurt someone; but we can still desire to change to another allocation if the change makes enough participants better off and by so much that we feel that the injury to others is not enough to offset the benefit." 5 This is a rather bold assertion of how elites view Utilitarianism.Elites believe that it is acceptable to make others worse off as long as there is some benefit to a favored third party.The group being harmed is rarely if ever consulted about their opinion on the matter.We see this thinking with ACA.Healthy people have been mandated to subsidize the sick members of the group.The elites cannot understand why the healthy are not be happy with that arrangement which is why they failed to predict what was obvious to Austrians: ACA enrollment would be skewed towards high cost sick patients leading to losses by insurers and ever increasing premiums which would skew the enrollment even more. 6nneth Arrow then provides an intellectual justification for subsidies to achieve a desired distribution of health care. "For any given distribution of purchasing power, the market will, under the assumptions made, achieve a competitive equilibrium which is necessarily optimal; and any optimal state is a competitive equilibrium corresponding to some distribution of purchasing power, so that any desired optimal state can be achieved." 5 is true that the market clearing price depends on the initial distribution of goods and money.It is also true that different starting conditions will, in general, lead to a different market clearing price.It is far from clear, however, that any market clearing price and quantity can be achieved by rearranging the starting conditions.This is another assertion provided without any evidence or justification that, apparently, Kenneth Arrow considered to be self-evident. "The redistribution of purchasing power among individuals most simply takes the form of money: taxes and subsidies.The implications of such a transfer for individual satisfactions are, in general, not known in advance.But we can assume that society can ex post judge the distribution of satisfactions and, if deemed unsatisfactory, take steps to correct it by subsequent transfers.Thus, by successive approximations, a most preferred social state can be achieved, with resource allocation being handled by the market and public policy confined to the redistribution of money income." 
5 Here we have the Progressive policy of achieving desired market results through subsidies and taxes. Kenneth Arrow admits that the results of the policy will not be known in advance. That part of his paper seems to be conveniently ignored by opponents of the Free Market. Kenneth Arrow assumed, without any justification or proof, that each iteration would get progressively closer to the final goal. The reality is that each iteration of policy made things worse and the subsidies made health care less affordable for everyone. At the time, in 1963, 90% of senior citizens could afford to pay their total health care costs out of pocket.7 After 50 years of tweaking and adjusting subsidies through Medicare, Medicaid, Medicare Part D, and the ACA, nobody can afford health care in the United States.

We see that Kenneth Arrow's prescription for a health care system that followed market principles would be the kinds of subsidies and taxes that were enacted with Medicare, Medicaid, and the ACA. It is ironic that the next section of the paper is purported to prove that the scarcity of health care cannot be most efficiently handled by the market, because that condition would require the government to become a direct provider of health care. Kenneth Arrow concludes this introductory section with a discussion of risk and risk transfer.

"The instance of nonmarketability with which we shall be most concerned is that of risk-bearing. The relevance of risk-bearing to medical care seems obvious; illness is to a considerable extent an unpredictable phenomenon. The ability to shift the risks of illness to others is worth a price which many are willing to pay." 5

Some clarification of this statement is necessary because many people now conflate insurance with subsidy. It is true that illness is unpredictable, but that does not make it a non-marketable commodity. While in some cases it is desirable to shift risk onto others, this is not the case for all aspects of health care. I do not know in advance when I will need a Band-Aid, but that does not prevent a robust market for Band-Aids from existing. The current price of Band-Aids is about 6 cents per Band-Aid, so I keep a box of them handy for future needs.

"Nevertheless, as we shall see in greater detail, a great many risks are not covered, and indeed the markets for the services of risk-coverage are poorly developed or nonexistent." 5

Kenneth Arrow maintains that the unavailability of health insurance for all people is proof that health care is not a marketable commodity and that markets cannot satisfactorily distribute health care. Continuing my Band-Aid example, it would be silly to expect insurance to cover Band-Aids, as the cost of administering the program would be much greater than the cost of the Band-Aid. Malcolm Bird discovered this when he took his 1-year-old daughter to the emergency room (ER).8 The treatment consisted of cleaning the finger and applying a Band-Aid. The bill was $629. The hospital justified the outrageous price as being "only" $7 for the actual Band-Aid and over $400 for the "service" fee. The $7 charge for the Band-Aid was only 100 times the market price of a Band-Aid. Was the $400 service fee reasonable? By comparison, Medicare allows a pulmonary specialist (me) $139.58 for a level 5 (maximally complex) follow-up visit. There is a good reason that insurance does not cover Band-Aids, and non-marketability has nothing to do with it.
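The actuarial logic separating insurable from uninsurable purchases can be made concrete with a toy calculation. Every number below (event probability, claim size, profit loading, administrative overhead) is invented for illustration; the point is only that a rare catastrophic risk can be covered for a small premium, while insuring a cheap, frequent purchase is swamped by administration.

```python
# Toy actuarial comparison: a rare catastrophe versus Band-Aid "coverage".
# All numbers are hypothetical.

def annual_premium(expected_claims, claim_size, loading=0.15, admin=25.0):
    """Expected claim cost, plus a profit loading, plus fixed administration."""
    return expected_claims * claim_size * (1 + loading) + admin

# Rare, catastrophic event: insurable for a small fraction of the claim size.
print(annual_premium(expected_claims=0.0005, claim_size=500_000))  # ~ $312.50

# Frequent, tiny purchase: the fixed administration dwarfs the goods covered.
print(annual_premium(expected_claims=5, claim_size=0.06))          # ~ $25.35 for 30 cents of Band-Aids
```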
Catastrophic yet unpredictable events are insurable, and a robust insurance system existed before it was systematically destroyed by mandatory expansion of coverage to uninsurable conditions. Motor vehicle accidents are, for the most part, unpredictable. Developing acute leukemia is unpredictable. These very expensive events can be insured for pennies on the dollar because people are willing to accept risks on an actuarially sound basis that generates a profit by covering large numbers of people when only a small number will require a claim to be paid. The insurer accepts the average expected cost plus a profit; the insured makes a payment that is a premium to the actual risk.

The insured willingly pay the premium to the risk for two reasons. The first is that they are not required to set aside the large cost of treatment in the event of a disaster. In the case of the Band-Aid, the cost is low, so people just set that cost aside to be prepared, but this is not practical for rare and catastrophic events. The second reason is that buying insurance ahead of time avoids the problem of having a very poor bargaining position when you are in urgent need of a service. Your negotiating position to purchase health care is much more advantageous when you do not need it at the moment. Insurance avoids the problem of price gouging when emergencies arise.

Insurance can only cover insurable events. Not all health care services are insurable. Pre-existing conditions are not insurable. If someone has end stage renal disease, their health care has predictable
3,190.6
2016-10-11T00:00:00.000
[ "Economics", "Medicine" ]
Characterization and Compensation of the Residual Chirp in a Mach-Zehnder-Type Electro-Optical Intensity Modulator We utilize various techniques to characterize the residual phase modulation of a fiber-based Mach-Zehnder electro-optical intensity modulator. A heterodyne technique is used to directly measure the phase change due to a given change in intensity, thereby determining the chirp parameter of the device. This chirp parameter is also measured by examining the ratio of sidebands for sinusoidal amplitude modulation. Finally, the frequency chirp caused by an intensity pulse on the nanosecond time scale is measured via the heterodyne signal. We show that this chirp can be largely compensated with a separate phase modulator. The various measurements of the chirp parameter are in reasonable agreement. Introduction Intensity modulators are important components of high-speed fiber-optic communication systems. They have also proven useful as fast switches in a variety of laser-based experiments in atomic and molecular physics. Of particular interest are the waveguide-based Mach-Zehndertype electro-optic modulators. They have high speed, good extinction, and generally require low drive voltages. However, when used to modulate the intensity, there can be an accompanying residual phase modulation, or equivalently, a residual frequency chirp. The extent of this phase modulation for a given intensity modulation is quantified by the chirp parameter. For many applications, this residual chirp is undesirable. For example, in dense wavelengthdivision-multiplexing (DWDM) systems, chirp can lead to crosstalk between adjacent channels. In other situations, the chirp can be beneficial. For example, pulses with chirp have been used for adiabatic excitation/de-excitation in atom optics [1,2] and ultracold collision experiments [3]. For these applications, excitation efficiencies can depend on the detailed temporal variations of frequency and intensity, a situation where the techniques of coherent control may be fruitfully applied. In all cases, it is important to at least characterize, and possibly control, this chirp. Here we examine a particular intensity modulator and measure its chirp parameter using a variety of techniques. These rather distinct methods give consistent results. We also show that it is possible to largely compensate this residual chirp with a separate phase modulator. A number of techniques have been utilized to characterize the performance of Mach-Zehnder electro-optic intensity modulators and related devices, such as electroabsorption modulators. Sending the modulated light through a dispersive fiber, the frequency response was recorded with a network analyzer in order to obtain the chirp parameter [4,5]. Stretched ultrafast pulses were modulated and then heterodyned with a delayed probe to measure the complex temporal response [6]. Frequency resolved optical gating has been used to characterize the intensity and phase properties of a high-speed modulator [7]. A Mach-Zehnder modulator, driven by a delayed subharmonic, was employed to measure the chirp parameter of a separate modulated source [8]. A Mach-Zehnder interferometer was utilized as an optical frequency discriminator to examine the chirp of a modulated source [9,10]. Specific modulation sidebands were selected with a tunable filter and their phase shifts compared to yield the chirp parameter [11]. Analyzing the ratios of modulation sidebands, as we discuss in Sect. 
4, has been used in a number of variations to determine the chirp parameter [12-18]. Finally, various homodyne and heterodyne techniques have been employed to map out the amplitude and phase transfer functions of optical modulators [19-21]. Our technique is novel in that we use an optical heterodyne set-up to directly measure the phase shift as a function of modulation voltage under essentially static conditions. This yields directly the intrinsic chirp parameter. We also use the sideband ratio technique for comparison purposes. Finally, we measure via heterodyne the frequency chirp induced by a short intensity pulse and compare to the expected shape. The paper is organized as follows. In Sect. 2, we examine the operating principle of a Mach-Zehnder intensity modulator and the origin of residual phase modulation. In Sect. 3, we describe our heterodyne measurements of phase shift as a function of applied voltage. Our use of the sideband ratio technique is presented in Sect. 4. The short-pulse chirp measurements are discussed in Sect. 5. Sect. 6 comprises concluding remarks.

Residual Phase Modulation and the Chirp Parameter

The operating principle of a waveguide-based Mach-Zehnder-type electro-optic intensity modulator is shown in Fig. 1. Light from an optical fiber is coupled into a waveguide and then equally split into two paths which form the arms of a Mach-Zehnder interferometer. Light from these two arms is then recombined and coupled out to another fiber. The waveguides are made from lithium niobate, an electro-optic material, so that when a voltage is applied, a phase change is induced. To modulate the intensity, a controllable phase difference between the two arms is required. In the ideal intensity modulator, an equal but opposite phase would be induced in each arm, so that the phase of the output light is not modified. Such "X-cut" devices are available, but generally have higher insertion loss and require somewhat higher drive voltages because of the larger electrode-waveguide spacing. A "Z-cut" device, such as we use, has the electrode closer to one of the waveguides, causing an asymmetry between the two arms and therefore a phase modulation of the output when the intensity is modulated.

Fig. 1. Schematic of Mach-Zehnder-type intensity modulator. The incident beam is equally split into two arms, 1 and 2, and recombined to provide the output. The phase difference between the two arms, controlled by the rf electrode, determines the output power via interference.

For our purposes, the important parameters of the Mach-Zehnder (MZ) intensity modulator (IM) are the voltage-to-phase conversion coefficients for the two arms, γ_1 and γ_2, which are assumed to be constant with respect to the applied modulation voltage V(t). Assuming that the input field of amplitude E_0 and frequency ω_0 is equally split (without loss) between the two arms, the output field can be written as

E_out(t) = (E_0/2) [e^{i(ω_0 t + ϕ_01 + γ_1 V(t))} + e^{i(ω_0 t + ϕ_02 + γ_2 V(t))}]. (1)

Here, ϕ_01 and ϕ_02 are the static phases for each arm. With no modulation applied, a dc bias voltage controls the static phase difference ∆ϕ_0 = ϕ_01 − ϕ_02 and thus determines the output level. In the presence of modulation, the output field can be expressed in terms of the time-dependent phase difference

∆ϕ(t) = [∆ϕ_0 + (γ_1 − γ_2)V(t)]/2 (2)

and the time-dependent output phase

ϕ(t) = [ϕ_01 + ϕ_02 + (γ_1 + γ_2)V(t)]/2 (3)

as

E_out(t) = E_0 cos(∆ϕ(t)) e^{i(ω_0 t + ϕ(t))}. (4)

Since the device is an interferometer, the phase difference determines the ratio of output power to input power (assuming no loss):

P(t)/P_0 = cos²(∆ϕ(t)). (5)

The voltage change required to go from minimum to maximum output power is given by V_π = π/(γ_1 − γ_2).
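A quick numerical check of this model (using the equations as reconstructed above, which is our reading of the garbled source) is to sweep the drive voltage, track the complex output field, and confirm that the output phase grows linearly with the phase difference with slope (γ_1 + γ_2)/(γ_1 − γ_2). The coefficient values below are arbitrary and chosen only for illustration.

```python
import numpy as np

# Arbitrary illustrative voltage-to-phase coefficients (rad/V) for the two
# arms of an asymmetric "Z-cut"-like device.
g1, g2 = 1.0, -0.075
phi01, phi02 = 0.0, np.pi / 2   # static arm phases; sets the bias point

V = np.linspace(0.0, 1.5, 200)
E = 0.5 * (np.exp(1j * (phi01 + g1 * V)) + np.exp(1j * (phi02 + g2 * V)))

P = np.abs(E)**2                               # output power (input power = 1)
phi = np.unwrap(np.angle(E))                   # output phase phi
dphi = (phi01 - phi02 + (g1 - g2) * V) / 2     # interferometer phase difference

# Slope of output phase versus phase difference = intrinsic chirp parameter.
alpha0 = np.polyfit(dphi, phi, 1)[0]
print(alpha0, (g1 + g2) / (g1 - g2))           # the two numbers agree
```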
This corresponds to a change in ∆ϕ of π/2. The time-dependent frequency is the time derivative of the output phase,

ω(t) = ω_0 + dϕ(t)/dt = ω_0 + [(γ_1 + γ_2)/2] dV(t)/dt. (6)

As can be seen from Eqs. (5) and (6), if γ_2 = −γ_1, we have pure intensity modulation with no phase modulation, while if γ_2 = γ_1, we have pure phase modulation. Of course, in an actual device, the situation will be somewhere in between, as characterized by the intrinsic chirp parameter [13,18]

α_0 := (dϕ/dt)/(d(∆ϕ)/dt) = (γ_1 + γ_2)/(γ_1 − γ_2). (7)

This parameter is the ratio of the time derivative of the output phase ϕ(t) (responsible for phase modulation) to the time derivative of the phase difference (responsible for intensity modulation). The case of α_0 = 0 corresponds to pure intensity modulation, while α_0 = ∞ corresponds to pure phase modulation. For modulation in only one arm of the MZ interferometer, α_0 = 1. Note that this intrinsic chirp parameter α_0 is not the same as the often-used intensity-dependent chirp parameter [22]

α := 2P (dϕ/dt)/(dP/dt). (8)

In the specific case where the power is modulated with small amplitude about P_0/2, i.e., when ∆ϕ_0 = −π/2, then α reduces to α_0 [13].

Direct phase measurement

We use two main methods to characterize the residual phase modulation of the EO Space AZ-0K5-05-PFA-PFA-790 intensity modulator: optical heterodyne and spectral analysis. For both techniques, the light source is a 780 nm external-cavity diode laser (ECDL) [23] with a linewidth of ∼ 1 MHz. The heterodyne set-up used for the direct phase shift measurement is shown in Fig. 2. The idea is to combine the modulator output with a fixed frequency reference beam and measure the resulting beat signal on a 2 GHz photodiode (Thorlabs SV2-FC) connected to a 2 GHz digital oscilloscope (Agilent Infiniium 54852A DSO). The procedure consists of stepping the modulation voltage, which varies the output intensity, and measuring the voltage-dependent phase shift of the heterodyne signal. After each step, the voltage is fixed, so the frequency of the output light is equal to ω_0. However, the phase of the output light varies with the modulation voltage (Eq. 3), so the phase of the heterodyne signal will shift after each step. The pattern of voltage steps is controlled with a Tektronix AFG 3252 240 MHz arbitrary waveform generator (AWG). The resulting intensity pattern is shown in Fig. 3a. Note that intensity is not directly proportional to voltage, as indicated in Eq. 5.

Fig. 2. Set-up for measuring the output phase of the intensity modulator (IM) as the drive voltage is varied. The horizontally polarized output of the IM is combined on a polarizing beamsplitter cube (PBS) with a vertically polarized reference beam, which is derived from the input beam by frequency shifting a total of 160 MHz with two 80 MHz acousto-optical modulators (AOMs). The combined beams, whose relative intensities are adjusted with a linear polarizer (LP), produce a heterodyne signal on the photodiode (PD). The drive voltage is controlled with an arbitrary waveform generator (AWG).

The reference beam for the heterodyne is generated by frequency shifting the input light by a pair of acousto-optic modulators (AOMs) to give a beat frequency of 160 MHz. Deriving the reference beam from the modulator input has the advantage of making the heterodyne signal immune to common-mode frequency fluctuations. However, since we are measuring phase, we are sensitive to path length variations between the reference and signal beams. Therefore, the entire voltage waveform is completed in 2 µs, which is fast compared to the time scale of vibrations and thermal drifts of optics in the heterodyne path.
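The heterodyne phase readout lends itself to a simple numerical prototype: simulate a 160 MHz beat note whose phase jumps at a voltage step, then recover the jump by least-squares sinusoid fits to the intervals before and after the step. Sample rate, noise level, and function names below are our assumptions, not the paper's analysis code.

```python
import numpy as np
from scipy.optimize import curve_fit

fs = 20e9                 # assumed oscilloscope sample rate (20 GS/s)
f_beat = 160e6            # heterodyne beat frequency
rng = np.random.default_rng(1)

def beat(t, amp, phase, offset):
    return amp * np.cos(2 * np.pi * f_beat * t + phase) + offset

# Two 100-ns intervals: reference phase 0, then a 0.4 rad step, plus noise.
t = np.arange(0, 100e-9, 1 / fs)
ref = beat(t, 1.0, 0.0, 0.0) + 0.05 * rng.standard_normal(t.size)
sig = beat(t, 1.0, 0.4, 0.0) + 0.05 * rng.standard_normal(t.size)

p_ref, _ = curve_fit(beat, t, ref, p0=[1.0, 0.1, 0.0])
p_sig, _ = curve_fit(beat, t, sig, p0=[1.0, 0.1, 0.0])
print(p_sig[1] - p_ref[1])   # recovered phase step, close to 0.4 rad
```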
Direct phase measurement

We use two main methods to characterize the residual phase modulation of the EO Space AZ-0K5-05-PFA-PFA-790 intensity modulator: optical heterodyne and spectral analysis. For both techniques, the light source is a 780 nm external-cavity diode laser (ECDL) [23] with a linewidth of ∼1 MHz. The heterodyne set-up used for the direct phase-shift measurement is shown in Fig. 2. The idea is to combine the modulator output with a fixed-frequency reference beam and measure the resulting beat signal on a 2 GHz photodiode (Thorlabs SV2-FC) connected to a 2 GHz digital oscilloscope (Agilent Infiniium 54852A DSO). The procedure consists of stepping the modulation voltage, which varies the output intensity, and measuring the voltage-dependent phase shift of the heterodyne signal. After each step the voltage is fixed, so the frequency of the output light is equal to ω0. However, the phase of the output light varies with the modulation voltage (Eq. 3), so the phase of the heterodyne signal shifts after each step. The pattern of voltage steps is controlled with a Tektronix AFG 3252 240 MHz arbitrary waveform generator (AWG). The resulting intensity pattern is shown in Fig. 3a. Note that intensity is not directly proportional to voltage, as indicated in Eq. 5.

Fig. 2. Set-up for measuring the output phase of the intensity modulator (IM) as the drive voltage is varied. The horizontally polarized output of the IM is combined on a polarizing beamsplitter cube (PBS) with a vertically polarized reference beam, which is derived from the input beam by frequency shifting a total of 160 MHz with two 80 MHz acousto-optical modulators (AOMs). The combined beams, whose relative intensities are adjusted with a linear polarizer (LP), produce a heterodyne signal on the photodiode (PD). The drive voltage is controlled with an arbitrary waveform generator (AWG).

The reference beam for the heterodyne is generated by frequency shifting the input light with a pair of acousto-optic modulators (AOMs) to give a beat frequency of 160 MHz. Deriving the reference beam from the modulator input has the advantage of making the heterodyne signal immune to common-mode frequency fluctuations. However, since we are measuring phase, we are sensitive to path-length variations between the reference and signal beams. Therefore, the entire voltage waveform is completed in 2 µs, which is fast compared to the time scale of vibrations and thermal drifts of the optics in the heterodyne path. The time spent between voltage steps (approximately 130 ns) spans many heterodyne periods, allowing an accurate determination of the phase. An example of a heterodyne signal, together with the corresponding sinusoidal fit, is shown in the inset of Fig. 3a. To further reduce the sensitivity to slow phase drifts, for the waveform shown in Fig. 3a we return to the power level P0/2 (V = Vπ/2) after each step and measure phase shifts relative to the phase at this reference power (voltage) level. This local reference phase is determined by a single sinusoidal fit to the central 100 ns of the reference intervals before and after each interval of interest. In this fit, the amplitude, frequency, offset, and phase are free parameters. A similar fit is done in the interval of interest, but with the frequency now fixed at the value from the reference fit. The important parameter is the phase shift in the interval of interest. This phase shift as a function of voltage is shown in Fig. 3b. Since the abscissa of Fig. 3b is V/Vπ = (∆ϕ)/π (from Eq. 2), and the ordinate is the change in output phase ϕ (from Eq. 3) divided by π/2, the slope of this straight line gives the intrinsic chirp parameter α0 directly. Averaging together the results from 45 repetitions, taken from three different voltage step patterns, we obtain α0 = 0.86(2), where the uncertainty is primarily statistical. Because the voltage waveform is completed in only 2 µs, and each phase measurement is sandwiched between two reference intervals, the uncertainty in each phase-shift measurement is minimized. Variations in the reference phase are <0.01 rad for each phase-shift measurement. The value of Vπ, which is needed for the determination of α0 discussed above, is obtained in a separate measurement. A slow, large-amplitude sinusoidal modulation is applied to the modulator and the output power P is monitored. If the peak-to-peak voltage excursion is Vπ, and the bias voltage is set to give ∆ϕ0 = π/2, then P swings between 0 and P0. If the voltage excursion exceeds Vπ, then P "wraps around" near its maxima and minima. For a peak-to-peak excursion of 2Vπ, the output powers at the maximum and minimum voltages meet at P0/2. If the bias voltage is slightly off from ∆ϕ0 = π/2, these output powers still meet at a common value. This method gives a rather precise measure of Vπ for two reasons: 1) it has reduced sensitivity to the bias voltage; and 2) at the point where the powers match, they depend linearly on the voltage amplitude, so locating this point is easier than locating an extremum. Using this technique, we determine Vπ = 1.58(1) V at the slow modulation frequency of approximately 1 MHz. This is consistent with the specified value of 1.6 V at 1 GHz for our device.
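The fitting procedure described above can be summarized by the following sketch, which runs on synthetic heterodyne segments; the sample rate, beat frequency, noise level, and the assumed proportionality between phase shift and voltage are illustrative stand-ins for the measured data.

```python
import numpy as np
from scipy.optimize import curve_fit

FS, FBEAT = 20e9, 160e6          # assumed sample rate and beat frequency (illustrative)

def sine(t, A, f, off, ph):
    return A * np.sin(2 * np.pi * f * t + ph) + off

def segment_phase(t, y, f_fixed=None):
    """Fit one heterodyne segment; return (frequency, phase)."""
    f0 = FBEAT if f_fixed is None else f_fixed
    th = 2 * np.pi * f0 * t
    ph0 = np.arctan2(np.mean(y * np.cos(th)), np.mean(y * np.sin(th)))  # quadrature guess
    if f_fixed is None:          # reference interval: amplitude, frequency, offset, phase free
        p, _ = curve_fit(sine, t, y, p0=[np.ptp(y) / 2, f0, np.mean(y), ph0])
        return p[1], p[3]
    # interval of interest: frequency fixed at the reference value
    p, _ = curve_fit(lambda t, A, off, ph: sine(t, A, f_fixed, off, ph),
                     t, y, p0=[np.ptp(y) / 2, np.mean(y), ph0])
    return f_fixed, p[2]

rng = np.random.default_rng(0)
alpha0_true, Vpi = 0.86, 1.58
volts, shifts = np.linspace(0.0, Vpi, 9), []
t = np.arange(0, 100e-9, 1 / FS)
for V in volts:
    ref = sine(t, 1.0, FBEAT, 0.0, 0.3) + 0.02 * rng.standard_normal(t.size)
    sig = sine(t, 1.0, FBEAT, 0.0, 0.3 + alpha0_true * (np.pi / 2) * V / Vpi) \
          + 0.02 * rng.standard_normal(t.size)
    f_ref, ph_ref = segment_phase(t, ref)
    _, ph_sig = segment_phase(t, sig, f_fixed=f_ref)
    shifts.append(ph_sig - ph_ref)

# Slope of [phase shift / (pi/2)] versus [V / Vpi] estimates alpha0, as in Fig. 3b
slope = np.polyfit(volts / Vpi, np.array(shifts) / (np.pi / 2), 1)[0]
print(f"recovered alpha0 ~ {slope:.3f}")
```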
Determining the chirp parameter from modulation sidebands

The second method for measuring α0 involves sinusoidal modulation of the intensity and analysis of the resulting sidebands [12][13][14][15][16][17][18]. Deviations of the sideband ratios from those expected for pure intensity modulation are indicative of residual phase modulation. Following the treatment of Bakos et al. [18], we apply a sinusoidal modulation V = V0 sin(ωt) to Eq. 1; the resulting output field contains the modulation indices a1,2 = V0 γ1,2. This field can be Fourier decomposed into Bessel-function sidebands (Eq. 10). For a fixed value of ∆ϕ0, set by the bias voltage, we can measure the intensity ratio of adjacent sidebands,

r_{n,n+1} = \frac{|J_n(a_1)e^{i\phi_{01}} + J_n(a_2)e^{i\phi_{02}}|^2}{|J_{n+1}(a_1)e^{i\phi_{01}} + J_{n+1}(a_2)e^{i\phi_{02}}|^2} = \frac{J_n^2(a_1) + J_n^2(a_2) + 2J_n(a_1)J_n(a_2)\cos(\Delta\phi_0)}{J_{n+1}^2(a_1) + J_{n+1}^2(a_2) + 2J_{n+1}(a_1)J_{n+1}(a_2)\cos(\Delta\phi_0)}.  (11)

We measure the intensities of the carrier (n = 0) and the first three sidebands (n = 1, 2, 3), thus obtaining three independent ratios. Since we only have two unknowns, a1 and a2, the system is over-determined. For simplicity, we take ∆ϕ0 = 0, the value used in the experiment, and define the ratio β = a2/a1 = γ2/γ1, so that Eq. 11 simplifies to

r_{n,n+1} = \frac{[J_n(a_1) + J_n(\beta a_1)]^2}{[J_{n+1}(a_1) + J_{n+1}(\beta a_1)]^2}.  (12)

For a given pair of sidebands, we define

\Delta_{n,n+1} = [J_n(a_1) + J_n(\beta a_1)]^2 - r_{n,n+1}[J_{n+1}(a_1) + J_{n+1}(\beta a_1)]^2,  (13)

which should equal zero for the correct values of a1 and β. To find these values, the three expressions for ∆_{n,n+1} (for n = 0, 1, 2) are plotted as functions of a1 and β, and the common point where all three vanish simultaneously is determined. Once β is determined, we can use Eq. 7 to calculate the intrinsic chirp parameter, α0 = (1 + β)/(1 − β) (14).

The measurement of sideband ratios is relatively straightforward. The modulator is biased at ∆ϕ0 = 0 (P = P0) and a sinusoidal modulation of amplitude V0 = 3 V and frequency ω/(2π) = 240 MHz is applied. The spectrum is observed with a scanning Fabry-Perot interferometer (Coherent 33-6586-001) with a 7.5 GHz free spectral range. A typical spectrum, showing only the carrier and first three positive sidebands, is shown in Fig. 4. Applying the above procedure to the measured sideband ratios yields α0 = 0.81(1), where the uncertainty is due mainly to the 5% uncertainty in measuring the height of each sideband. The sidebands are well resolved, so crosstalk between them is negligible.
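A compact numerical version of this sideband analysis is sketched below; the "measured" ratios are synthetic stand-ins for the Fabry-Perot peak heights, a least-squares solver replaces the graphical search for the common zero, and α0 = (1 + β)/(1 − β) is our reading of Eq. 14.

```python
import numpy as np
from scipy.special import jv
from scipy.optimize import least_squares

def ratios(a1, beta, n_max=3):
    """Adjacent-sideband intensity ratios r_{n,n+1}, n = 0..n_max-1, at dphi0 = 0 (Eq. 12)."""
    amp = np.array([jv(n, a1) + jv(n, beta * a1) for n in range(n_max + 1)])
    return (amp[:-1] / amp[1:]) ** 2

r_meas = ratios(1.9, -0.10)            # synthetic "measurement": assumed a1 = 1.9, beta = -0.10

def residuals(p):                      # Delta_{n,n+1}-style residuals (cf. Eq. 13)
    a1, beta = p
    return ratios(a1, beta) - r_meas

sol = least_squares(residuals, x0=[1.5, 0.0])
a1_fit, beta_fit = sol.x
alpha0 = (1 + beta_fit) / (1 - beta_fit)
print(f"a1 = {a1_fit:.3f}, beta = {beta_fit:.3f}, alpha0 = {alpha0:.3f}")
```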
Chirp caused by an intensity pulse

Since we are ultimately interested in using the intensity modulator to produce a specified pulse on the nanosecond time scale, we need to know the frequency chirp under these conditions. To measure the chirp, we use a variation of the heterodyne set-up described in Sect. 3, shown in Fig. 5. Since we are measuring on faster time scales, we need a higher beat frequency (e.g., 2 GHz), so we use a separate external-cavity diode laser for the reference beam. Also, for the chirp compensation discussed below, we add an electro-optic phase modulator (EO Space PM-0K1-00-PFA-PFA-790-S) before the intensity modulator. The voltage pulse driving the intensity modulator is generated by the AWG. To facilitate the diagnostics, we use a negative-going intensity pulse. This yields a strong heterodyne signal everywhere except at the very center of the pulse, allowing an accurate determination of the unshifted beat frequency. With a Gaussian voltage pulse programmed into the AWG, the resulting voltage output, together with the corresponding 2.55 ns FWHM Gaussian fit, is shown in Fig. 6a. Aside from some ringing on the trailing edge of the pulse due to the finite speed of the AWG, the fit is very good. Applying this voltage pulse to the intensity modulator yields the output intensity shown in Fig. 6b. Because the intensity modulator is an interferometer, and the phase difference in the two arms is proportional to the applied voltage, the output intensity is not directly proportional to the voltage, as indicated in Eq. 5. Using the Gaussian derived from Fig. 6a, we fit this intensity pulse to Eq. 5. Because of timing delays between the electronic and optical signals, the centering of the Gaussian is allowed to be a free parameter. The amplitude (fractional dip) is also a free parameter, but its value of 85.7% is consistent with the value of 86.6% predicted from Eq. 5, knowing the peak voltage from Fig. 6a and the value of Vπ. Once again, the overall fit is quite good. The electronic ringing seen in the voltage pulse carries through to the intensity pulse. Although this intensity pulse is not a Gaussian, such a pulse, or indeed any arbitrary pulse shape, can easily be realized by appropriately programming the AWG. Pulse widths are limited by the finite AWG bandwidth of 240 MHz (2 Gsamples/s). The frequency chirp produced by applying the Gaussian pulse to the intensity modulator is shown in Fig. 6c. The local frequency of the heterodyne signal is determined by measuring the period as the time interval between successive maxima and between successive minima. The overall heterodyne frequency of approximately 1.9 GHz, determined to within ±2 MHz from the pre-pulse heterodyne signal, is subtracted from the measured frequencies. As mentioned above, this large offset, in conjunction with interpolation of the data, allows us to measure how the frequency changes on a time scale significantly faster than the width of the pulse. Also shown in Fig. 6c (solid curve) is the chirp expected for the applied voltage pulse V(t) of Fig. 6a. Since the residual phase change is proportional to the voltage, and the frequency is the time derivative of the phase (Eq. 6), the frequency change is proportional to the derivative of the Gaussian voltage pulse (Eq. 15). The solid curve in Fig. 6c is Eq. 15 with Vπ = 1.58 V and α0 = 0.86, as determined in Sect. 3. Except for the ringing on the trailing edge, the agreement is quite good. The maximum chirp observed is −67 MHz/ns and the peak-to-peak frequency deviation is 151 MHz. In Fig. 6d, we demonstrate compensation of the residual IM frequency chirp using the phase modulator (PM). The PM is a device similar to the IM, but it has only a single waveguide and is therefore not an interferometer. Its output phase is shifted in proportion to the input voltage, so we expect to be able to compensate residual phase modulation from the IM by applying the same voltage pulse shape to the PM. For the compensated curve in Fig. 6d, a Gaussian signal from a separate channel of the AWG is applied to the PM, and its amplitude and width are adjusted to minimize the peak-to-peak frequency excursion. With a 2.55 ns FWHM pulse applied to the IM, the optimum PM pulse is slightly narrower, 2.37 ns FWHM. We believe that these small variations in pulse shape are due to the slightly different electrical responses of the IM and the PM. Using this technique, we are able to reduce the peak-to-peak frequency modulation by more than a factor of 3. Further reduction could likely be obtained by optimizing the shape of the signal applied to the PM through a genetic algorithm.
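The period-counting step behind Fig. 6c can be sketched as follows on a synthetic chirped beat note; the chirp shape and amplitude assumed here are illustrative and are not the measured waveform.

```python
import numpy as np
from scipy.signal import argrelextrema

# The instantaneous heterodyne frequency is estimated from the time between successive maxima
# and successive minima, and the constant beat offset is then subtracted.
FS, F_BEAT = 1e12, 1.9e9
t = np.arange(-6e-9, 6e-9, 1 / FS)
sigma = 2.55e-9 / 2.355                                          # 2.55 ns FWHM Gaussian pulse
dnu = 75e6 * (-t / sigma) * np.exp(-t ** 2 / (2 * sigma ** 2))   # assumed derivative-of-Gaussian chirp
phase = 2 * np.pi * np.cumsum(F_BEAT + dnu) / FS
y = np.sin(phase)

def local_frequency(t, y, order=5):
    """Frequency versus time from the intervals between successive extrema of the same kind."""
    times, freqs = [], []
    for comparator in (np.greater, np.less):                     # maxima, then minima
        idx = argrelextrema(y, comparator, order=order)[0]
        te = t[idx]
        times.append(0.5 * (te[1:] + te[:-1]))                   # midpoint of each interval
        freqs.append(1.0 / np.diff(te))                          # one period per interval
    return np.concatenate(times), np.concatenate(freqs)

tf, f = local_frequency(t, y)
chirp = f[np.argsort(tf)] - F_BEAT                               # subtract the 1.9 GHz offset
print("peak-to-peak frequency deviation ~ %.0f MHz" % (np.ptp(chirp) / 1e6))
```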
Conclusion

We have investigated the residual frequency chirp from a Mach-Zehnder-type electro-optic intensity modulator. The most direct technique, using an optical heterodyne to measure the output phase shift as a function of applied voltage, yields an intrinsic chirp parameter α0 = 0.86(2). A less direct measurement, based on sideband ratios for sinusoidal modulation, gives a slightly smaller value of α0 = 0.81(1). The first measurement is essentially at DC, while the second is at 240 MHz. Both of these values are marginally consistent with the value of 0.72(6) measured for a similar modulator at 450 MHz with the sideband technique [18]. Since α0 depends on the details of device fabrication, specifically on the electrode placement with respect to the two interferometer arms, it will not have a universal value. We have also examined the chirp resulting from the generation of a pulse with the intensity modulator. Heterodyne measurements show a chirp consistent with the measured parameters of the modulator. Using a separate phase modulator, we have demonstrated that the residual chirp can be partially compensated. Combining this type of intensity modulator with an arbitrary phase modulation system [24] will allow the production of arbitrarily shaped pulses with arbitrary chirps on the nanosecond time scale. This capability should prove useful for efficient excitation and coherent control in atomic and molecular systems.

Acknowledgements

This work was supported in part by the Chemical Sciences, Geosciences and Biosciences Division, Office of Basic Energy Sciences, U.S. Department of Energy. We thank EO Space for technical advice regarding the intensity modulator.
Clustering-Based Energy-Efficient Self-Healing Strategy for WSNs under Jamming Attacks

The Internet of Things (IoT) is a key technology to interconnect the real and digital worlds, enabling the development of smart cities and services. The timely collection of data is essential for IoT services. In scenarios such as agriculture, industry, transportation, public safety, and health, wireless sensor networks (WSNs) play a fundamental role in fulfilling this task. However, WSNs are commonly deployed in sensitive and remote environments, and thus face the challenge of jamming attacks. Therefore, these networks need the ability to detect such attacks and adopt countermeasures to guarantee connectivity and operation. In this work, we propose a novel clustering-based self-healing strategy to overcome jamming attacks, which we denominate fairness cooperation with power allocation (FCPA). The proposed strategy, aware of the presence of the jammer, clusters the network and designates a cluster head that acts as a sink node to collect information from its cluster. Then, the most convenient routes to overcome the jamming are identified and the transmit power is adjusted to the minimum value required to guarantee the reliability of each link. Finally, through the weighted use of the relays, the lifetime of each subnetwork is extended. To show the impact of each capability of FCPA, we compare it with multiple benchmarks that only partially possess these capabilities. In the evaluation, we consider a WSN composed of 64 static nodes distributed in a square area. Meanwhile, to assess the impact of the jamming attack, we consider seven different locations of the attacker. All experiments started with each node's battery full and stopped after one of these batteries was depleted. In these scenarios, FCPA outperforms all other strategies by more than 50% in transmitted information, owing to the efficient use of relay power through the weighted balancing of cooperative routes. On average over all analyzed scenarios, FCPA achieves 967,961 kb of transmitted information while retaining 63% residual energy. Additionally, the proposed clustering-based self-healing strategy adapts to changes in the jammer location, outperforming the rest of the strategies in terms of transmitted information and energy efficiency in all evaluated scenarios.

Introduction

Wireless sensor networks (WSNs) facilitate communication with, control of, and understanding of the surrounding world. Consequently, WSNs are being deployed in several environments owing to their capabilities [1]. These characteristics, in combination with the possibility of the nodes being connected to the Internet, constitute the base for the Internet of Things (IoT) paradigm [2,3]. A WSN integrates numerous sensors, nodes, routers, and gateways to communicate data along the network. Moreover, any node in the network can access the Internet and be managed remotely. Also, such an architecture allows those authorized to reach the network to access the data handled by the nodes [4]. Therefore, the integrity and reliability of the data are crucial to the objectives for which the WSN was created.
A WSN deals with numerous challenges to guarantee the reliability, energy efficiency, and correctness of the data in the network [5]. Owing to the broadcast nature of WSNs, attackers take advantage of this to weaken and compromise network performance [6]. Specifically, attack strategies that disturb the physical layer (PHY) are detrimental to the data. These types of attacks can be categorized as eavesdropping and jamming. Both attacks aim to exploit the data for the convenience of the attacker. Nevertheless, self-healing techniques have been created to overcome and relieve the impact of attackers on the integrity of the data [7].

Self-healing techniques take advantage of the properties of WSNs, such as rerouting, power allocation, and cooperation, to guarantee the reliability of the network and the data. These techniques are closely related to clustering techniques. Both approaches adapt the network topology to ensure reliability, energy efficiency, and data integrity, among other objectives. However, the joint use of these techniques to overcome jamming attacks and guarantee communications in the network in the presence of attackers is an unexplored topic. For this reason, this paper proposes a novel approach to overcome jamming attacks in several scenarios.

The proposed approach uses the strengths of self-healing and clustering techniques to mitigate the intrinsic drawbacks of each one and guarantee communications in the network under jamming attacks. Load balancing, rerouting, cooperation, and power allocation are used. Then, we compare the energy efficiency and data correctness of several techniques in jamming scenarios. This exhaustive comparative analysis provides valuable insights.

The remainder of the paper is structured as follows. Section 2 provides a detailed and exhaustive review of the state of the art of works focused on clustering, jamming, and self-healing. Next, the contributions of our work are listed and presented to highlight the findings and contributions to this unexplored topic. Then, Section 3 provides a detailed description of the system model. Next, in Section 4, we discuss the applicability of different clustering strategies under jamming. Later, in Section 5, we describe our novel proposed algorithm to overcome jamming attacks in the WSN context using different techniques. Then, in Section 6, the experimental data are discussed. Consequently, Section 7 discusses, explains, and details the main remarks from the scenarios. Finally, Section 8 concludes the paper and considers some directions for future work.
Related Work

The use of clusters in combination with self-organization and automatic configuration algorithms can overcome diverse challenges in the reliability and security of wireless networks [8][9][10][11][12][13][14][15][16][17][18][19]. In this sense, the most used clustering protocols are LEACH [8] and its variants: HEED [9], PEGASIS [10], EECS [11], and TEEN [12]. Through hierarchical routing based on clustering, these protocols guarantee data communication with energy efficiency. The principal differences among the protocols lie in the selection of the cluster heads (CHs), the hops between the nodes, the construction of the routing path, and the cost function used to select the nodes. In addition, the security problem is not considered in the design of these clustering protocols. Some security-oriented LEACH variants have also been proposed, such as SLEACH [13], SecLEACH [14], Armor-LEACH [15], and MS-LEACH [16]. In [13], the authors propose the use of two symmetric keys for each node, shared with the gateway, to improve the authentication process. However, the authors of SecLEACH [14] state that SLEACH does not provide a complete and efficient solution for node-to-CH authentication problems. Through an analysis of compromised links, the authors show that their proposed scheme improves security. However, energy efficiency is not completely met due to the generation of key pools and their distribution in the network. To improve the energy efficiency of SecLEACH, Armor-LEACH was proposed [15]. Using the SecLEACH algorithm and the Time-Controlled Clustering Algorithm (TCCA) as a basis, it improves the security and energy efficiency of the network.

The MS-LEACH protocol [16] combines single-hop and multi-hop transmissions, which results in an improvement in the network lifetime with respect to LEACH. Nevertheless, jamming is not taken into account. The authors of SEC in [17] propose a combination of the SPINS [18] and LEACH protocols to improve network security. With the addition of data authentication and data freshness, the gateway is capable of verifying the authenticity of the data and the CHs. However, neither work considers the possibility of jamming attacks, nor do they take into account performance metrics such as transmitted information or energy efficiency. Finally, the Enhanced SLEACH protocol is proposed in [19]. Using pairwise keys among the cluster members and their respective CHs, it outperforms SLEACH in terms of security, lifetime, and energy consumption. However, as with all the above approaches, it does not consider the presence of jamming attacks and their impact.
Moreover, in the last few years, the use of clustering for energy-efficiency maximization has been extensively investigated [20][21][22][23][24][25][26][27][28][29][30][31][32]. The works in [20][21][22][23][24][25][26][27][28] focus on the residual energy (or remaining battery power), and they coincide in that cluster head (CH) selection is critical to extending the network lifetime. Using Bee Colony Optimization [20], Fuzzy Logic [21,28], the Butterfly Model [22], Hierarchical Clustering [23], the Firefly Algorithm [24], the Rider Optimization Algorithm (ROA) [25], Particle Swarm Optimization [26], and Fog Logic [27], these works improve CH selection in different scenarios. On the other hand, some works focus on clustering and partition problems [29,30]. Using machine learning techniques and proposing novel clustering routing protocols, they aim at balancing the energy efficiency of the routing path. However, none of these works deal with security or jamming. The authentication issue is considered in the IoT context in [31,32]. However, the problem of authentication in the cluster context with a focus on energy efficiency is only considered in [31]. In [33], jamming is considered in a specific clustering context. That work proposes a novel strategy that aims to detect and eliminate the jammer node present in the network to improve energy efficiency. Using the packet delivery ratio (PDR), round-trip time (RTT), packet loss ratio, and RSS, the hyperbolic spider monkey optimization (HSMO) algorithm detects and eliminates the jammer node. However, the work does not consider a specific jamming strategy or model the jammer node. Therefore, PHY parameters and channel models are absent, given the focus of that work. Additionally, this work assumes that the edge and sub-edge nodes use a different frequency channel to communicate, which is no minor requirement.

Cooperation schemes in the WSN context are used primarily to improve energy efficiency [34,35]. Cooperation schemes are based on the use of multi-hop communications. The evolution of these schemes has demonstrated the benefits of relay selection, which requires considering the resources available to the nodes. In [34], the authors proposed a cooperative scheme in which the relays activate their receiving circuitry based on a switching probability. Then, the ON-OFF probabilities were adjusted to reduce power consumption and maximize the effective transmitted information. This proposal was improved in [35] by considering the use of multiple antennas in the sensor nodes, as well as power control. However, neither proposal considers how to keep the network running during a jamming attack.
On the other hand, the relationship that has been established between WSNs and unmanned aerial vehicles (UAVs) is undeniable [36][37][38]. In [36], a comprehensive review is provided of the main applications of WSNs, UAVs, and monitoring technologies. Mobile sink-based solutions have triggered the investigation of UAVs as data mules. In [37], the authors addressed the problem of optimizing the UAV path through all the sensor nodes distributed over a large agricultural area to reduce its flight time and increase the nodes' lifetime. In addition, the authors proposed an efficient algorithm for discovering and reconfiguring the activation time of the nodes. Meanwhile, in [38], an energy-efficient and fast data collection scheme was designed in UAV-aided WSNs for hilly areas with the help of a UAV as a data mule. The authors applied a modified tabu search algorithm to optimize the UAV position to collect data from a group of nodes, and the traveling salesman problem to achieve fast data collection. We consider the crucial role of UAVs in smart outdoor applications such as agriculture, transportation, health, and public safety. Therefore, the parameters determined in this work allow for the possibility of using UAVs to perform certain functions.

However, we wish to emphasize that the assistance of WSNs by UAVs does not guarantee the operation of the network against jamming attacks, as shown in [39,40]. In [39], the authors propose an anti-jamming scheme for collecting data in the presence of jamming attacks. A clustering approach is used to minimize the points that the UAVs visit and to guarantee that the messages transmitted from the cluster heads reach the UAV. The probabilistic channel model presented and the constraints imposed on the UAV and devices suggest that the jamming attack is of the constant type. The results show that the proposed approach surpasses the min k-means and genetic algorithm solution, as well as a genetic algorithm without clustering. Nevertheless, that work does not consider the communication links between the CH and the clustered nodes. Therefore, clustering strategies such as cooperation or power allocation are not considered. Moreover, the energy efficiency of the transmitted information is not part of the study. On the other hand, in [40], a similar approach to improve the data collection and trajectory of the UAV over the network is employed. Jamming is considered, but from the defensive side, as a countermeasure. Meanwhile, from the attacker's side, only eavesdropping is used. Consequently, jamming attacks on network communications are not part of that study.
Contributions

Clustering techniques with self-healing capabilities have not been fully investigated in the literature. Moreover, previous works only analyze such techniques in scenarios free of attackers. In this article, unlike previous works, we analyze the performance of different strategies in WSNs under jamming attacks. In addition, we propose a novel strategy that combines clustering, power allocation, cooperation, and load-balancing techniques to ensure network self-healing. Using the metrics of residual energy, network lifetime, coverage, and amount of transmitted information, we analyze the potential of combining these techniques under jamming attacks. Thus, the main contributions of this paper are as follows:

• We analyze different clustering and self-healing techniques with power allocation and cooperation capabilities in scenarios with jamming attacks. For each scenario, we provide a detailed qualitative and quantitative analysis of the algorithms and techniques with their strengths and weaknesses.

• We propose a novel adaptive clustering-based self-healing algorithm that combines power allocation, cooperation, and load balancing. This novel algorithm is tested in the presence of a jamming attack, previously ignored in the literature, ensuring efficient network operation in such conditions.

• We describe the advantages and disadvantages of each technique in jamming scenarios in terms of residual energy and transmitted information. The exhaustive analysis of routing paths, energy efficiency, and transmit powers provides insightful information on the behavior of the network against jamming attacks.

System Model

We assume N sensor nodes are distributed in a flat square area of H × H meters. The sensor nodes communicate with a gateway (or sink) node located in the center of the square area, as shown in Figure 1. The spacing between adjacent nodes is denoted by D. The WSN nodes use time division multiple access (TDMA). Moreover, the nodes are static, and their positions are known to all nodes. This information allows each node to execute clustering algorithms in a distributed way and to know the most convenient associations in advance. When the system operates in cooperative mode, all the nodes can assume a relay role and use TDMA to transmit their data. Additionally, we assume that nodes can implement techniques that allow them to estimate the presence and location of a static jammer. Although this point is outside the scope of this work, note that nodes can implement energy detection techniques and share this information with neighbors, which would allow the execution of localization techniques. For a better understanding of jammer localization in multi-hop wireless networks, the survey in [41] can be consulted. Moreover, there are recent jammer localization techniques that are efficient even in the face of jammer mobility [42] and of multiple jammers [43].

The scenario considers a flat outdoor environment. We assume that no obstacles are in the line-of-sight (LOS) between the nodes. Moreover, we assume that the jammer's location is chosen by the attacker, but we limit ourselves to considering the most representative potential locations in order to carry out a viable number of experiments.
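The results later refer to nodes by chessboard-style labels (e.g., node B2 or quadrant DE45). The small sketch below generates the node grid of the system model under an assumed A-H column / 1-8 row convention; the edge length is an illustrative value rather than the paper's parameter.

```python
import string

H_EDGE = 140.0                      # edge length of the square area in metres (assumed)
N_SIDE = 8                          # 8 x 8 = 64 nodes
D = H_EDGE / (N_SIDE - 1)           # equal spacing between adjacent nodes

def label(col, row):
    """Chessboard label for the node in column `col` (0-7) and row `row` (0-7)."""
    return f"{string.ascii_uppercase[col]}{row + 1}"

def position(col, row):
    """(x, y) coordinates in metres for a labelled node."""
    return col * D, row * D

nodes = {label(c, r): position(c, r) for c in range(N_SIDE) for r in range(N_SIDE)}
print(len(nodes), "nodes; B2 is at", nodes["B2"], "; G7 is at", nodes["G7"])
```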
It might be thought that the self-healing capability is not required for WSNs, considering that a UAV can fly over each sensor node and collect information from it, as described in [37]. However, from the analysis of [37], we can conclude that collecting information from all nodes in a network is an energetically expensive task, especially in WSNs with many nodes. In such scenarios, guaranteeing the freshness of the information would be an infeasible task for a single UAV. Therefore, if a network is attacked and its nodes are isolated from the sink, it is convenient for the network to be capable of self-healing, at least in interconnected groups, in such a way that the UAV would only have to collect information from the leader of each group. The network is composed of N sensor nodes distributed in a square area with edge length H and equal spacing D between nodes. To improve readability when referring to particular node positions, we use a chessboard-style labeling throughout this work.

Channel Model

The log-distance model is a suitable channel model for this flat scenario with small distances between nodes [44], in which the received power is calculated as

P_r = \frac{k P_t}{d_{tr}^{\alpha}},  (1)

where P_t is the transmit power, d_{tr} is the distance between transmitter and receiver, α is the path-loss exponent, and k accounts for other factors such as the wavelength, antenna heights, and antenna gains [44,45]. Consequently, the signal-to-interference-plus-noise ratio (SINR) perceived by a receiver in the presence of a jammer when the ith node transmits is

\mathrm{SINR}_i = \frac{k P_i / d_{ir}^{\alpha}}{k P_j / d_{jr}^{\alpha} + n_0 W},  (2)

where P_i and P_j are the transmit powers of the ith node and the jammer, respectively; d_{ir} and d_{jr} are the distances from the ith node and the jammer to the receiver, respectively; n_0 is the noise power spectral density, and W is the channel bandwidth. Therefore, if the transmit power and location of the jammer are estimated, it is then possible to estimate the minimum transmit power (P*_i) required by the ith node to satisfy the SINR threshold (γ_0) at the receiver, subject to a maximum transmit power (P_max) constraint,

P^{*}_i = \min\left\{ \gamma_0 \left[ P_j \left( \frac{d_{ir}}{d_{jr}} \right)^{\alpha} + \frac{n_0 W d_{ir}^{\alpha}}{k} \right],\; P_{\max} \right\}.  (3)
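A minimal numerical sketch of this link budget is given below; all parameter values are illustrative placeholders, not the values used in the paper.

```python
import numpy as np

K_PL, ALPHA = 1e-4, 3.0               # path-loss constant k and exponent alpha (assumed)
N0W = 1e-12                           # noise power n0*W in watts (assumed)
GAMMA0 = 10 ** (5 / 10)               # SINR threshold gamma_0 = 5 dB (assumed)
P_MAX = 10 ** ((17 - 30) / 10)        # 17 dBm maximum transmit power, in watts

def sinr(p_i, d_ir, p_j, d_jr):
    """SINR (Eq. 2) at a receiver at distance d_ir from the sender and d_jr from the jammer."""
    return (K_PL * p_i / d_ir ** ALPHA) / (K_PL * p_j / d_jr ** ALPHA + N0W)

def min_tx_power(d_ir, d_jr, p_jam):
    """Minimum transmit power (Eq. 3) meeting gamma_0, or None if it would exceed P_max."""
    p_req = GAMMA0 * (p_jam * (d_ir / d_jr) ** ALPHA + N0W * d_ir ** ALPHA / K_PL)
    return p_req if p_req <= P_MAX else None

# Example: a node 10 m from its cluster head, jammer 20 m from the cluster head at 14 dBm.
p_jam = 10 ** ((14 - 30) / 10)
p_star = min_tx_power(d_ir=10.0, d_jr=20.0, p_jam=p_jam)
if p_star is None:
    print("link infeasible under the P_max constraint -> the node needs a relay")
else:
    print(f"P* = {10 * np.log10(p_star * 1e3):.1f} dBm, "
          f"achieved SINR = {10 * np.log10(sinr(p_star, 10.0, p_jam, 20.0)):.1f} dB")
```

If the required power exceeds P_max, the node is treated as isolated, which is the condition that motivates the cooperative strategies discussed later.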
Jamming Attacks

Every communication system that works with data is exposed to security threats. Non-authorized users of the communication system may want to access, manipulate, or destroy the data. In consequence, the communication system needs to identify each attack class in a timely manner in order to neutralize it.

WSNs may be exposed to two main attack classes: eavesdropping and jamming. In eavesdropping attacks, the spy user aims to passively listen to the information. This type of attack is normally combated by encrypting the information. On the other hand, jammers generate radio-frequency signals in the same operating band with the main objective of interrupting the system operation. The jammer can adopt different operating strategies:

• Constant jammer: the radio signal from the jammer is emitted constantly in the communications channel. Consequently, the jamming signal and the legitimate signal collide almost all the time, which provokes the discarding of packets on the receiver side. However, this strategy demands excessive energy to emit the radio signal, and it is easy to detect.

• Random jammer: the radio signal is generated randomly, decreasing energy consumption and making the attack difficult to predict. Therefore, the probability of a collision is lower than with the constant strategy, but with higher stealth.

• Reactive jammer: this strategy takes advantage of the ability to listen to the communication channel. The jammer device records the different sniffed data transmitted in the channel. Next, the attacker decides to emit the radio signal to target specific data packets. This capability improves the energy consumption and stealthiness of the jammer.

A WSN may apply different strategies to defend against attackers. These strategies depend on the successful detection of attacks and the timely identification of the attack class in order to execute the defense mechanisms. Therefore, it is crucial to successfully identify whether an anomalous situation is due to an attack or to other system factors. Several studies have shown that the use of metrics such as received power, energy consumption, packet delivery ratio (PDR), and bit error rate (BER) permits the detection of attackers in the majority of cases [46][47][48].

Energy Consumption Model

We consider the network lifetime as the operation time from when the batteries of the devices (B_i) are full until the battery of any of the nodes is completely depleted. Since our research focuses on countering jamming attacks, for simplicity we assume that at the beginning of the attack all devices have the same energy charge, equal to the total battery capacity. IoT nodes consume energy in information acquisition and processing, reception and transmission of information, as well as other scheduling and synchronization functions [34,35,49]. However, in this research we abstract away all functions not related to communication, in order to compare the impact of the different transmission control and/or cooperative communication strategies. The energy consumed by each process is calculated considering the mean duration of the process and the components involved [34,35].

The energy consumed during transmission (e^t_i) by node i can be estimated by considering its transmit power P_i, the power amplifier efficiency η, the transmission circuit operating power P_{ct}, and the transmission time, which is determined by the length of the message L and the transmission rate R,

e^t_i = \left( \frac{P_i}{\eta} + P_{ct} \right) \frac{L}{R}.

Similarly, for the reception we calculate the energy consumption as

e^r_i = P_{cr} \frac{L}{R},

where P_{cr} is the power consumed by the circuitry in the reception process. Therefore, the energy consumption of node i after transmitting t_i messages is t_i e^t_i. The implementation of the self-healing strategies proposed in the following section implies that some nodes, designated as cluster heads, perform the function of information sinks, so their energy consumption depends on the number r_i of messages received, r_i e^r_i. Moreover, some of the proposed strategies are based on cooperative communication between nodes, so the energy consumption associated with the cooperation of these nodes depends on the number c_i of messages in which they act as relays, receiving and retransmitting the information, c_i (e^r_i + e^t_i). Therefore, the residual energy of node i depends on the self-healing strategy used, the role that the ith node plays in its cluster, as well as the number of messages it receives (if it is a sink) or the number of messages it sends and those in which it cooperates (if it is also a relay). However, it should be noted that when transmit power control techniques are also used, the cost associated with forwarding the information will also depend on the relative distance to the node to which the information will be sent. Therefore, in such cases, if the relays of the next level change, the energy cost of sending the information to them changes as well.
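The per-message bookkeeping above can be sketched as follows; the message length, rate, amplifier efficiency, and circuit powers are illustrative assumptions.

```python
def message_energy(p_tx, L=1024 * 8, R=250e3, eta=0.35, p_ct=10e-3, p_cr=12e-3):
    """Return (e_t, e_r): transmit and receive energy per message in joules (L bits, R bit/s)."""
    t_air = L / R                       # time on air
    return (p_tx / eta + p_ct) * t_air, p_cr * t_air

def node_consumption(p_tx, n_tx=0, n_rx=0, n_coop=0):
    """Energy spent by a node that sends n_tx messages, receives n_rx as a cluster head,
    and relays n_coop messages (one reception plus one retransmission each)."""
    e_t, e_r = message_energy(p_tx)
    return n_tx * e_t + n_rx * e_r + n_coop * (e_r + e_t)

p_tx = 10 ** ((14 - 30) / 10)           # 14 dBm in watts
print("plain sensor (100 msgs):     %.3f J" % node_consumption(p_tx, n_tx=100))
print("cluster head (1500 rx msgs): %.3f J" % node_consumption(p_tx, n_rx=1500))
print("relay (100 own + 300 coop):  %.3f J" % node_consumption(p_tx, n_tx=100, n_coop=300))
```

Even at identical transmit powers, relays spend considerably more energy than ordinary sensors, which is the imbalance that motivates the load balancing introduced later with FCPA.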
Baseline Clustering Strategies under Jamming

In the face of a jamming attack, one of the first countermeasures must be to isolate the jammer. However, the location of the jammer is established by the attacker, and the most damaging position for the network is usually chosen, e.g., the neighborhood of the sink node. Therefore, it is often necessary to establish new sink nodes, and clustering is used for this purpose.

Once the jammer is detected, its location and transmit power are estimated (which is considered resolved and outside the scope of this paper). Then, the cluster formation and the selection of CHs are carried out. Using the K-medoids technique [50], we generate the clusters by assigning an optimum number of centroids or CHs. To that end, we use as a metric the interference plus noise (IPN) that would be perceived at the location of each cluster node. The CH position must meet the following requirements to be chosen as valid:

• Reduce the attacker's impact on the majority of the nodes in the cluster.

• Minimize the number of isolated nodes.

• Provide the highest number of associations.

The first two criteria can be incorporated into the clustering by considering Chebyshev distances [50], and the third one is used to select each CH (a small clustering sketch is given at the end of this subsection). The clustering process itself leads to a trade-off between two options: many small clusters whose internal communication is not affected by the jammer, and a larger number of clusters that allows data collection from emerging sink nodes, e.g., through a UAV.

The clustering strategies that we propose take both purposes into account, which does not guarantee that all nodes of the same cluster can communicate directly and successfully with the selected CH using the same transmit power as when there is no jammer. To guarantee effective communication in such cases, we next discuss baseline strategies exploiting transmit power control, cooperative communication, and a combination of both.
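The jammer-aware clustering step can be sketched with a plain K-medoids over the node grid using the Chebyshev distance, where candidate CHs are penalized by the IPN they would perceive from the jammer. The penalty weight and all parameters are our own simplification for illustration, not the paper's exact cost function.

```python
import numpy as np
from itertools import product

D = 20.0                                        # grid spacing (assumed)
nodes = np.array(list(product(range(8), repeat=2)), dtype=float) * D   # 8x8 node grid
jammer = np.array([3.5, 3.5]) * D               # jammer near the centre (Position 1-like)
P_J, K_PL, ALPHA, N0W = 0.025, 1e-4, 3.0, 1e-12

def ipn(point):
    """Interference plus noise perceived at `point` from the jammer."""
    d = np.linalg.norm(point - jammer)
    return K_PL * P_J / d ** ALPHA + N0W

def chebyshev(a, b):
    return np.max(np.abs(a - b), axis=-1)

def kmedoids(points, k=4, penalty=1e12, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    medoids = rng.choice(len(points), k, replace=False)
    for _ in range(iters):
        dist = np.stack([chebyshev(points, points[m]) for m in medoids], axis=1)
        labels = dist.argmin(axis=1)
        new_medoids = medoids.copy()
        for c in range(k):
            members = np.where(labels == c)[0]
            # candidate cost = summed Chebyshev distance + IPN penalty at the candidate CH
            costs = [chebyshev(points[members], points[m]).sum() + penalty * ipn(points[m])
                     for m in members]
            new_medoids[c] = members[int(np.argmin(costs))]
        if np.array_equal(new_medoids, medoids):
            break
        medoids = new_medoids
    return medoids, labels

chs, labels = kmedoids(nodes)
print("CH indices:", chs, " cluster sizes:", np.bincount(labels, minlength=4))
```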
Power Domain

The use of the optimum transmit power improves energy efficiency and extends the network lifetime. Therefore, comparing strategies that use a fixed power (FP) at all nodes with strategies that conveniently use power allocation (PA) is the first step to establish benchmarks in our research.

However, in practice, the hardware used and/or legislation may impose upper and lower limits on the allowed transmit powers. Therefore, in some cases, the power necessary to transmit and ensure the complete reception of the messages cannot be guaranteed. Consequently, some messages will be lost due to the disruptive signal generated by the jammer. Based on [47,51], there exists an SINR threshold that permits decoding the majority of packets correctly. Therefore, meeting this requirement under the established maximum transmit power constraint determines the nodes that can communicate directly with their CH in each cluster.

Fixed Power (FP)

In this strategy, we assume that clustered nodes use a fixed transmit power to reach the CH. Therefore, the transmit power is chosen to ensure that the nodes communicate with the CH in a single hop. Setting the transmit power to the required upper limit guarantees intra-cluster communication, but implies unnecessary power consumption for many of the established links. To illustrate the energy cost of this strategy, we establish a fixed transmit power for all nodes that guarantees communication. Therefore, the communication link between the node most affected by the jammer and the corresponding CH determines the required transmit power. Consequently, the fixed transmit power is obtained according to (3).

Power Allocation (PA)

Instead, this strategy assumes that the nodes can individually set their transmit power to reach their respective CH. Therefore, each node calculates the optimum transmit power that ensures the correct reception of the message at the CH, according to (3). Consequently, the nodes closer to the CH can use lower transmit powers and reduce their energy consumption. In contrast, the nodes far from their respective CHs demand higher transmit powers and eventually drain their batteries more quickly.

Cooperation Domain

In a WSN, the nodes may assume different roles in the network. By varying the roles of the nodes and making new links between them, several new communication paths can be created.

Cooperation (CP)

In the cooperation case, the nodes can retransmit their data using relay nodes until reaching the CH; however, they use a fixed power. The nodes evaluate the best cooperation route using the Minimum Receiver Sensitivity (MRS) metric to build the retransmission chain. Several nodes can cooperate in this scheme, but the selection among the different routes is based on the SINR value of the weakest link of each route. Once the link with the lowest SINR in each route has been identified, the SINR values of these worst links are compared and the route corresponding to the highest such value is chosen, i.e., the route with the best worst link (a code sketch of this selection rule is given below). Since in this scheme it is assumed that all nodes use the same transmit power, the smaller (greater) the Euclidean distance between nodes, the greater (smaller) the SINR, as long as the distances from the jammer to the potential relays are similar.

Summarizing, for the cooperation scheme we have the following considerations and assumptions:

• All the nodes can assume the role of a relay node in the routing chain.

• The Euclidean distance is used to find the optimum route, considering the distance and hops between the isolated nodes and the CHs.

• If two cooperating nodes have the same Euclidean distance, the first node found by the algorithm is chosen.

In this strategy, the nodes that meet the SINR threshold through a direct connection with the CH (i.e., those that can directly transmit their messages) are considered associated. Then, each isolated node evaluates whether the closest associated neighbor can serve as a relay, according to the SINR of that link. As new nodes become associated, the isolated ones evaluate new associations that imply more hops. This action is repeated until all nodes are associated, or at least all nodes that can associate under the preset transmit power.

Power Allocation and Cooperation (PAC)

The fourth strategy uses both techniques, cooperation and power allocation, to route the data. Therefore, the nodes that participate in the cooperation optimize their transmit power to reach the next relay node. Consequently, the nodes involved in the routing chain can reduce their energy consumption and increase their lifetime.
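The "best worst link" selection rule of the CP baseline is illustrated by the following sketch; node positions, powers, and the candidate routes are purely illustrative.

```python
import numpy as np

K_PL, ALPHA, N0W = 1e-4, 3.0, 1e-12
P_TX = 10 ** ((14 - 30) / 10)           # all nodes use the same 14 dBm transmit power
P_J = 10 ** ((14 - 30) / 10)
jammer = np.array([70.0, 70.0])

def sinr(tx, rx):
    """SINR at rx for a transmission from tx in the presence of the jammer (cf. Eq. 2)."""
    d_tr = np.linalg.norm(np.subtract(tx, rx))
    d_jr = np.linalg.norm(jammer - np.asarray(rx))
    return (K_PL * P_TX / d_tr ** ALPHA) / (K_PL * P_J / d_jr ** ALPHA + N0W)

def best_worst_link_route(routes):
    """Return the index of the route whose weakest hop has the highest SINR."""
    worst = [min(sinr(a, b) for a, b in zip(r[:-1], r[1:])) for r in routes]
    return int(np.argmax(worst)), worst

# Two candidate routes from an isolated node at (80, 60) to a CH at (20, 20):
routes = [
    [(80.0, 60.0), (60.0, 40.0), (20.0, 20.0)],                 # two hops, relay close to the jammer
    [(80.0, 60.0), (60.0, 20.0), (40.0, 20.0), (20.0, 20.0)],   # three shorter hops
]
idx, worst = best_worst_link_route(routes)
print("worst-link SINRs (dB):", [round(10 * np.log10(w), 1) for w in worst])
print("selected route:", idx)
```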
Fairness Cooperation with Power Allocation (FCPA)

Note that some relay nodes in a WSN may significantly reduce their lifetime to support cooperation. Therefore, we focus on a strategy that allows load balancing between potential relays and potential paths, as shown in Figure 2. The previous strategies used to overcome the jamming problem provide different approaches to ensure that the isolated nodes reach the sink node or the CH. However, the constraints imposed do not consider energy efficiency, coverage, and the fulfillment of their objectives. Consequently, an algorithm that improves the rerouting path construction and the load balance in energy terms will also contribute to these objectives. To this end, we analyze the weaknesses of the presented protocols and propose a novel protocol that solves these problems, which we call Fairness Cooperation with Power Allocation (FCPA). This protocol permits the relay nodes to cooperate while maximizing their energy efficiency in the routing process.

FCPA Protocol

The Fairness Cooperation with Power Allocation (FCPA) protocol assigns weights to the cooperating nodes to distribute the load. The allocation of weights considers the energy cost of reception and retransmission associated with cooperation. The lifetime of the network is maximized using the following modifications and criteria:

• Modifications in the network, such as the appearance of a jammer, its mobility, or a change in its transmit power, demand that each isolated node choose a CH as its sink node using the Euclidean distance that separates it from the CH and the CH from the jammer. Likewise, it chooses the closest neighbors associated with the cluster of the chosen CH.

• Since the association of nodes to a CH is subject to minimizing the energy cost associated with communication, when the location of the jammer changes, the nodes that remain isolated may conveniently associate with a different CH. Consequently, the number of nodes associated with each cluster may change.

• The selection of each relay node is conditioned by the number of hops of the optimum route chosen by the cooperation algorithm. If any route exceeds the preset hop limit N_hops, it is discarded. This maximum allowed number of hops is preset by the network manager.

• The weight associated with the relay selection is inversely proportional to the energy cost of the cooperation and is determined by the number of alternative valid routes that support the same communication.

The weights used to distribute the load between the cooperating nodes are estimated as

w_{i,j} = \frac{1/e^{cop}_{i,j}}{\sum_{k=1}^{K} 1/e^{cop}_{i,k}},

where e^{cop}_{i,j} is the cooperation energy between the ith node that needs cooperation and the jth node (relay) that provides the cooperation. Accordingly, e^{cop}_{i,k} is the cooperation energy of the kth node that can provide cooperation to the ith node in the routing chain, and K is the number of alternative valid routes of the ith node. Finally, this approach generates a tree-based clustering given the distribution of the load, as shown in Figure 3.
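Before turning to the results, the load-balancing idea behind the FCPA weights can be sketched numerically as follows; the normalized inverse-energy form is our reading of the weight definition above, and the energy values are illustrative.

```python
import numpy as np

def fcpa_weights(e_cop):
    """e_cop[k]: cooperation energy (J) of the k-th alternative relay/route for one node."""
    inv = 1.0 / np.asarray(e_cop, dtype=float)
    return inv / inv.sum()

def split_messages(n_msgs, e_cop, rng=np.random.default_rng(1)):
    """Distribute n_msgs messages over the alternative relays according to the FCPA weights."""
    return rng.multinomial(n_msgs, fcpa_weights(e_cop))

# Example: three alternative relays whose per-message cooperation energies differ
e_cop = [1.2e-3, 1.5e-3, 2.4e-3]          # joules per relayed message (illustrative)
w = fcpa_weights(e_cop)
loads = split_messages(1000, e_cop)
print("weights:", np.round(w, 3), " messages per relay:", loads)
print("energy spent per relay (mJ):", np.round(loads * np.asarray(e_cop) * 1e3, 1))
```

With these weights, the expected energy spent by each alternative relay is approximately equalized, which is the load-balancing behavior FCPA aims for.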
Simulation Results

In this section, we present comparative results of FCPA against the baseline power control and cooperation strategies discussed in Section 4. We evaluate the performance of the self-healing protocols in a WSN under a constant jamming attack in terms of energy consumption, network lifetime, coverage, and PDR. To this end, we simulate the protocols in an outdoor, flat coverage area using MatLab R2021b. Table 2 shows the system parameters used in the validation and evaluation phase. The simulation runs until the first node exhausts its battery; then, the data referring to residual energy and collected information are processed. We consider a WSN with 64 nodes uniformly spread, as shown in Figure 4. In the absence of a jammer, it is logical to assume that the sink node is located at the center of the WSN region. Therefore, the most harmful location for a jammer would be that same position, thus annulling the operation of the sink node. That is why we first consider the jammer at the center, the so-called Position 1. Then, to evaluate the impact of the jammer location, always assuming the cancellation of the initial sink node, we consider another six relevant positions according to the symmetry of the proposed scenario. Consequently, the performance of the different strategies is evaluated for each scenario according to the seven potential jammer locations. In all cases, the jammer transmit power is considered fixed and equal to 14 dBm.

In the results below, we show the clusters and network structures associated with each self-healing strategy in response to the jamming attack in each scenario. In addition, we provide data on the amount of information transmitted by the WSN and the residual energy of each node.

First Scenario

The first experiment uses a static jammer deployed in the middle of the network, in the same position as the sink node in DE45. We then assess the clustering and self-healing algorithms to evaluate the effectiveness of these strategies. First, in this scenario, it was determined that four is the optimal number of clusters. These four clusters are illustrated with different colors in Figure 5. Next, the CH of each cluster was selected considering the Chebyshev distance. Then, each node associates with the CH that guarantees the highest SINR. Figure 5 shows that when the jammer is located in the center of the network, the nodes are clustered into four quadrants of 16 nodes each, while their respective CHs are the nodes located at B2, G2, B7, and G7. It also shows that some nodes are isolated, since they cannot communicate directly with their respective CHs due to transmit power limitations. The symmetry of this first scenario allows a more detailed analysis to be discussed based on a single cluster. To keep the analysis of the FCPA algorithm clear and simple, Figure 5, as well as the figures corresponding to the other scenarios, shows the rerouting paths of the isolated nodes to the corresponding CH, while the routing paths of the nodes directly associated with the corresponding CH are omitted. Figure 6a shows the result of the FP algorithm from Section 4.
Note that when the transmit power of the nodes and of the jammer is the same, some nodes cannot communicate directly with the CH. The blue-background nodes are associated nodes, while the nodes with an orange background are isolated nodes, which represent a third of the nodes of each cluster. Additionally, the residual energy of each battery is denoted by the gradient color bar. Operating all nodes with 14 dBm of transmit power is very energy inefficient. When all associated nodes have exhausted their batteries, the battery of the sink node still retains a high energy load. On the other hand, when power control is activated, we verify that transmitting with 17 dBm allows all the nodes to communicate directly with the CH, even the E4 node, which is the most distant from the CH in the analyzed cluster. To ensure that the comparison between all algorithms is fair in terms of providing coverage to all nodes in each cluster, the non-cooperative FP and PA strategies will use 17 dBm as the maximum transmit power. Figure 6b shows the result of the PA algorithm from Section 4. The transmit power allocation significantly favors the nodes closest to the CH, which can operate with low transmit power, so they retain residual energy. However, it does not obtain benefits over FP in terms of network lifetime, since this is determined by the node that operates with the highest transmit power (i.e., E4). Therefore, the network lifetime when FP and PA are used is the same, allowing only the collection of 804,528 bits. In the strategies based on cooperation, the maximum transmit power was kept at 14 dBm to allow any node to be assisted by another node, even when it is not adjacent. Figure 7a shows the result of the CP algorithm from Section 4. CP provides coverage to all nodes even when the maximum preset transmit power is half of that preset for FP and PA. Now, the E4 node is assisted by the E2 node and does not need to reach the CH directly. When the CP algorithm is used, the first nodes that exhaust their batteries are E2, F3, and G4, which attend to the communications of the nodes isolated in Figure 6a. The results show that the nodes that provide cooperation deplete their batteries first. Specifically, the nodes (E2, F3, G4) cooperate with the isolated nodes to retransmit their messages. Therefore, these nodes are under a heavy load and are the first to exhaust their batteries. Next, the isolated nodes spend more energy than the CH because they require more power to transmit their messages. Because of this, the isolated nodes are the second group to exhaust their batteries, followed by the CH. Finally, the nodes that reach the CH in one hop are the last nodes that survive, in positions (F1, G1, H1, H2, H3).
The scenario that combines cooperation and power allocation exhibits similar behavior. The cooperation algorithm finds the same routes to retransmit the messages. However, in each link of the rerouting chain, the power allocation reduces the energy consumption of the transmission process. Therefore, the cooperating nodes are under a heavy load in contrast with the rest of the nodes in the cluster. Consequently, they can retransmit more information but exhaust their batteries first. The FCPA algorithm takes advantage of the weaknesses of the previous protocols. Therefore, it provides relief to the nodes under heavy loads. To this end, the algorithm distributes the retransmission of messages over different routes. As a result, the number of nodes that cooperate increases, reducing the load on the nodes determined by the cooperation algorithm. In Figure 8, the routes and the relative load are represented by arrows and markers.

The FCPA algorithm significantly improves the energy efficiency of the network. As presented in Figure 8, isolated transmitters and cooperating nodes distribute their loads equally. Consequently, the CH node is the first to exhaust its battery. This phenomenon is exclusive to this strategy and shows the robustness of the solution. The cooperating nodes and the isolated nodes, in that order, retain the next-lowest levels of residual energy.

In Figure 9, we present the results of the residual energy and transmitted information for the FP, PA, CP, and PAC protocols and the proposed FCPA protocol. The results show that the trade-off between residual energy and transmitted information is poor for FP. The PA and CP protocols perform better than FP, with 18.60 and 22.40 times more energy, respectively. However, the CP protocol transmits less information, with 118,192 kb, since, when all nodes operate with the same transmit power, the relay nodes deplete their batteries first. On the other hand, the PA strategy achieves the same amount of transmitted information as FP, since the most distant nodes use the maximum transmit power. Therefore, the PA and CP protocols surpass the FP protocol in terms of residual energy. Then, the combination of these two protocols, named PAC, improves the residual energy left in the network by 7% with respect to CP. Additionally, the amount of transmitted information increases to 139,158 kb, compared with CP. The amount of transmitted information is the same for the FP and PA protocols. This is explained by the fact that both protocols use the maximum power to reach the CH in one hop, since the node closest to the jammer cannot reduce its transmit power. Therefore, in both strategies, this node exhausts its battery first.

For the cooperation strategies, the amount of transmitted information and the residual energy are slightly better for the PAC protocol than for CP. In these strategies, the nodes selected as relays are in greater demand. Therefore, a relay node that cooperates with a larger number of nodes exhausts its battery faster than a node that does not cooperate. Consequently, the relay nodes in the first stage of the rerouting path, which provide connectivity to the isolated nodes, carry a higher load. Finally, our proposed FCPA strategy outperforms the other analyzed strategies in the trade-off between residual energy and the amount of transmitted information: its residual energy is lower than that of the other strategies, but the information transmitted is much larger.
Second Scenario

In this scenario, the jammer node is deployed in the second position to analyze the self-healing and clustering protocols. Figure 10 shows how the network reacts to the jamming attack. The clusters in green and pink adapt to the jammer while keeping the same shape. However, the blue and yellow clusters reshape to overcome a strong jammer presence. Consequently, the coverage results are 93.30% for the green and pink clusters and 66.67% for the blue and yellow clusters.

The first steps of the algorithm try to reach the CH in one hop using the maximum transmit power. Figure 10 represents the jammer's effect on each cluster. The green and pink clusters have only one isolated node. On the other hand, the blue and yellow clusters have five isolated nodes in different positions. Then, the algorithm calculates the optimum transmit power to ensure that all nodes reach the CH. The result of this calculation is a transmit power of 17.88 dBm. The information transmitted with the FP and PA strategies is the same, owing to the use of the same transmit power. The amount of transmitted information reaches 687,024 kb. Similar to the previous jammer position, the transmitter nodes deplete their batteries faster than the CH, owing to the difference between the reception and transmission power. However, the PA strategy notably increases the residual energy, by 423.77 J.

The cooperation strategy generates the same paths detected for the previous jammer position. The nodes that provide cooperation have a lower load, owing to the decrease in isolated nodes in the green and pink clusters. However, the relay nodes in the blue and yellow clusters carry the same load as in the previous scenario. Consequently, the amount of transmitted information is the same for the CP strategy. Then, the PAC strategy marginally increases the amount of transmitted information to 120,720 kb. Additionally, the residual energy differs by only 26.95 J. Finally, FCPA surpasses all the protocols in the amount of transmitted information with 924,288 kb, while the residual energy is 396.97 J.

Third Scenario

With the jammer in its third position, in quadrant FG45, for the first time the clusters most affected by the jamming each choose a new CH. The blue and yellow clusters move their CHs from G2 to F2 and from G7 to F7, respectively. On the other hand, the green and pink clusters preserve their previous CHs. Additionally, the isolated nodes in the blue and yellow clusters increase to approximately 44%. Therefore, the cooperation and FCPA algorithms choose new routes and relays to reach the isolated nodes. Figure 11 shows the new relays and paths. For the green and pink clusters, we observe that the isolated nodes decrease to only one per cluster. Consequently, the jammer significantly damages the blue and yellow clusters. The algorithm found a transmit power of 18.50 dBm to surpass the jamming effect, based on the most affected nodes, which correspond to H4 and H5. Therefore, the FP and PA strategies transmit 602,422 kb, while the residual energy is 11.76 J and 470.13 J for the FP and PA strategies, respectively.
For the cooperation strategy, the nodes E3, F1, G2, G3, and H3 are defined as relays for the isolated nodes. However, the relays E3 and G3 are the first nodes to exhaust their batteries completely. These nodes have a higher number of associations than the other relay nodes; consequently, they spend more energy on each transmission and reception process. As a result, the residual energy is almost the same for the CP and PAC strategies, with 533.49 J and 536.93 J, respectively. For the information transmitted, we acquire 118,192 kb and 192,278 kb for CP and PAC, respectively. The FCPA algorithm again provides the best outcome: the amount of information transmitted reaches 914,288 kb, and the residual energy is 415.50 J. Therefore, the FCPA algorithm provides the best tradeoff between these metrics.

Fourth Scenario

Figure 12 shows that the CHs are preserved for the green and pink clusters when the jammer is positioned in quadrant GH45. Again, the blue and yellow clusters are the most affected by the jammer. Additionally, the number of isolated nodes and their locations are the same. The nodes H4 and H5 suffer the highest impact from the jammer; however, the required transmit power to overcome the jamming is 17.90 dBm. For the FP and PA strategies, the information transmitted increases to 687,024 kb, while the residual energy is 7.79 J and 462.53 J, respectively. As explained earlier, the cooperation strategies CP and PAC have a poor performance in the information transmitted in the network, with results below 200,000 kb. However, in this jammer scenario, both strategies surpass this value and reach 245,868 kb and 387,763 kb for the CP and PAC strategies, respectively. On the other hand, the residual energy is 465.77 J and 480.54 J, respectively. As in the previous scenarios, the difference in residual energy is marginal. The FCPA strategy again shows a notable performance, with 914,288 kb of information transmitted and a residual energy of 462.53 J.

Fifth Scenario

The jammer in quadrant EF34 isolates 38% of the nodes in the blue cluster. Node E4 now changes its cluster and CH association to the yellow cluster, as can be appreciated in Figure 13. The clustering determines a transmission power of 19.05 dBm to surpass the jammer effect. This value is based on node E4, which requires this transmit power to reach the CH at position G7. Consequently, the amount of information transmitted is 534,835 kb for both the FP and PA strategies, while the residual energy is 14.92 J and 487.71 J, respectively.

The cooperation strategies choose nodes E2, F1, F2, G3, G4, and H3 as eligible relays. However, only relays F1 and H3 exhaust their batteries; similar to the previous scenarios, these relays have more associations. The amount of information transmitted reaches 245,869 kb and 300,262 kb for the CP and PAC strategies, respectively. These results are similar to the previous scenario and demonstrate the impact of cooperation here. The residual energy acquired is 72.98% for CP and 79.52% for PAC. For the proposed FCPA strategy, the information transmitted is 914,288 kb, while the residual energy is 72.27%. Thus far, the results demonstrate the excellent tradeoff of the FCPA strategy across the analyzed scenarios.
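The battery-depletion pattern described in these scenarios can be reproduced with a toy energy ledger. The sketch below is a minimal illustration under assumed values, not the paper's radio model: the battery size, per-message reception cost, and message duration are all placeholders (only the 17.88 dBm and 14 dBm power levels appear in the text). It shows why a relay that both receives and forwards dies before the nodes it serves:

```python
E0 = 5.0        # initial battery per node, joules (assumed)
E_RX = 1e-4     # energy per received message, joules (assumed)
T_MSG = 1e-3    # transmission time per message, seconds (assumed)

def e_tx(p_dbm):
    """Transmit energy for one message at p_dbm."""
    return 10 ** ((p_dbm - 30) / 10) * T_MSG   # dBm -> watts, times seconds

battery = {"isolated": E0, "relay": E0, "CH": E0}

def hop(tx, rx, p_dbm):
    battery[tx] -= e_tx(p_dbm)   # sender pays the transmit energy
    battery[rx] -= E_RX          # receiver pays a fixed reception cost

msgs = 0
while min(battery.values()) > 0:
    hop("isolated", "relay", 17.88)   # power level quoted for this setup
    hop("relay", "CH", 14.0)
    msgs += 1
print(msgs, battery)                  # the relay is the first to die
```

Because the relay pays both a reception and a transmission cost per message, its ledger drains roughly twice as fast, matching the E3/G3 and F1/H3 depletion reported above.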
Sixth Scenario

In this scenario, shown in Figure 14, the jammer is located in quadrant EG23, and the CH of each cluster varies notably from the previous scenarios. The most affected cluster, the blue one, changes its CH selection to E3. This change permits the node at position D3, previously belonging to the green cluster, to associate with the blue cluster. Additionally, several nodes in column H change their association to the yellow cluster.

The isolated nodes amount to 23.44% of the total nodes in the network. As a consequence of the high impact of the jammer, the relay nodes carry a higher load to provide communication. The algorithm calculates that, with a transmission power of 23.45 dBm, the isolated nodes can communicate with the CH. The information transmitted for FP and PA is only 199,171 kb, due to the high transmit power. The residual energy is 30.66 J for FP and 559.28 J for PA. The cooperation strategies use several relay nodes to connect all the nodes to the CH; this is explained by the relative distance between the jammer and the isolated nodes in column G. Note that as the associations a relay can provide are reduced, the load per relay increases. The information transmitted for the cooperation strategies is 94,937 kb for CP and 159,142 kb for PAC. The residual energy for these strategies is 559.28 J for CP and 566.80 J for PAC, the worst tradeoff among the analyzed scenarios. Finally, the information transmitted for FCPA is 914,288 kb with a residual energy of 424.34 J. The main difference from the PAC cooperation strategy is the balanced use of the relays; as a consequence, the CH nodes exhaust their batteries first. However, in this scenario, the number of hops increases significantly for the isolated nodes in column H.

Seventh Scenario

Finally, the most damaging effects of the jammer occur when it is located in quadrant GH12. For this scenario, the CHs of the clusters are distributed asymmetrically. The single isolated nodes of the blue, green, and pink clusters lie on a perfect diagonal relative to their corresponding CHs; however, only the blue cluster has 50% of its nodes isolated. In Figure 15, a unique event occurs: node H1 cannot communicate with any neighbor, even using the maximum transmit power of 14 dBm. This node would require a transmit power of 18.28 dBm to reach the closest neighbor and 19.45 dBm to reach the CH.

For the cooperation strategies, the information transmitted is 245,868 kb for CP and 328,092 kb for PAC, and the residual energy is 484.50 J for CP and 517.78 J for PAC. Here, the relay nodes at positions E2 and G4 have the highest number of associations and exhaust their batteries first. Finally, the information transmitted for FCPA is 914,288 kb and the residual energy is 409.66 J. As with the cooperation strategies, the FCPA strategy cannot provide cooperation to node H1, owing to the maximum transmission power being limited by the transceiver hardware.
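The reachability checks quoted in these scenarios (e.g., node H1 needing 18.28 dBm against a 14 dBm hardware limit) amount to a link-budget test. The following sketch is a hedged illustration with an assumed log-distance path-loss model; the exponent, reference loss, sensitivity, jamming margin, and distance are placeholders, not the paper's propagation parameters:

```python
import math

# Assumed log-distance path-loss model; none of these constants
# come from the paper.
PL0, N_EXP, D0 = 40.0, 3.0, 1.0    # 40 dB loss at the 1 m reference
P_MAX_DBM = 14.0                   # transceiver limit quoted in the text
SENS_DBM = -90.0                   # assumed receiver sensitivity
JAM_MARGIN_DB = 20.0               # assumed extra margin near the jammer

def required_tx_power(d_m, jammed=False):
    """Transmit power (dBm) needed to close a link of d_m metres."""
    path_loss = PL0 + 10 * N_EXP * math.log10(d_m / D0)
    return SENS_DBM + path_loss + (JAM_MARGIN_DB if jammed else 0.0)

need = required_tx_power(35.0, jammed=True)   # 35 m: illustrative distance
print(f"need {need:.2f} dBm, have {P_MAX_DBM} dBm ->",
      "reachable" if need <= P_MAX_DBM else "isolated")
```

With these placeholder values the node comes out isolated, which is the same kind of verdict the clustering algorithm reaches for H1.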
Main Findings

From the attacker's point of view, the positions located near the contour of the network have certain benefits: there, the attacker does not need to gain physical access to the property to perform the attack or deploy the jammer node. Scenarios four and seven illustrate this condition and its impacts. Additionally, on average, the results show that the sixth scenario is the most detrimental to the WSN, with only 313,341 kb of information transmitted. On the other hand, the first scenario is the one with the least residual energy, at 56.66%, but with 629,810 kb of information transmitted. Considering the average results of the strategies over the analyzed scenarios, the power control domain surpasses the cooperation domain: the information transmitted for FP and PA is 571,963 kb versus the 137,976 kb and 215,354 kb of CP and PAC, respectively. Nevertheless, the residual energy of FP is very low, with a final value of 2.06%. Consequently, the selection of a self-healing strategy must consider the tradeoff between WSN performance metrics.

FCPA stands out among the self-healing strategies, with 967,961 kb of information transmitted and 63% of residual energy on average. Whenever FCPA was used, the first node to exhaust its battery was a cluster head, demonstrating the high energy efficiency of this strategy. The integration of multiple techniques that increase the efficient use of resources is essential in IoT networks with resource-limited devices. In addition, UAVs can assist self-healing strategies by collecting data from small clusters.

To summarize the results for both metrics, we tabulate the data in Table 3 and show the results in Figures 16 and 17. For all the scenarios, the FCPA strategy exhausts the batteries of the CH nodes; therefore, it reaches the maximum information transmitted, while the residual energy is always above 50% in all scenarios.

Figure 16 shows the results with the jammer moving horizontally, while Figure 17 shows the results with the jammer moving diagonally. The algorithms FP, PA, CP, and PAC have a poor tradeoff since they cannot distribute the load evenly, so the best performance was always achieved by FCPA. In Figure 17, the tradeoff is worse than in Figure 16; consequently, a jammer moving into a formed cluster notably impacts the integrity of the cluster and its communications.

Conclusions

In this study, we proposed FCPA, a novel clustering and self-healing strategy for WSNs under jamming attacks. On average, cooperation strategies with power control transmit roughly twice as much information as those with fixed power, while power control strategies without cooperation exhaust their energy rapidly. Beyond the benefit of integrating both techniques, FCPA outperforms the other self-healing and clustering strategies in terms of information transmitted and residual energy, thanks to the load balancing it implements.

Integrating cooperation and power control, as in the PAC strategy, reduces the disadvantages of each approach. However, PAC does not balance the cooperation routes, so the network relay nodes spend their energy quickly; FCPA effectively addresses this problem. The FCPA strategy exceeds the information transmitted by the power control strategies by approximately 54.09% on average over all scenarios. All the experiments confirmed that FCPA performs exceptionally well in all the analyzed jamming scenarios.
In future work, we will evaluate different jamming strategies and propose self-healing strategies that adapt to them. We also need to study strategies for facing intelligent and mobile jammers. Additionally, we intend to apply artificial intelligence techniques to predict the effects of non-constant jamming and to generate alternative paths.

Figure 1. A network composed of N sensor nodes distributed in a square area with edge length H and equal spacing D between nodes. To improve the readability of node positions on the network, a chessboard notation is used whenever a particular position is referred to throughout the work.
Figure 2. Different routes that the nodes can use to reach the sink node in the network, using direct routes or cooperation routes.
Figure 3. The arrows show the different routes from the isolated node to the cluster head; a wider arrow represents a higher transmit power than a thinner arrow. In (a), direct communication with fixed transmit power is executed by FP, while in (b), power control is used by PA. Cooperation is included in (c) by CP, where different routes are considered with the same transmit power. In (d), the transmit power control and cooperation algorithms are integrated.
Figure 4. Evaluation scenarios: wireless sensor network with 64 static nodes and the seven jammer locations considered in the research.
Figure 5. First scenario: the clustered network is represented with the CHs of each cluster, the jammer at Position 1, and the assisted communications when using cooperation-based strategies.
Figure 6. Residual energy for (a) the FP algorithm with 14 dBm of fixed transmit power and (b) the PA algorithm with 17 dBm of maximum transmit power.
Figure 7. Residual energy for the cooperative strategies: (a) the CP algorithm with fixed transmit power and (b) the PAC algorithm, where power allocation is possible. The width of the arrows indicates when the power is fixed (a) and when it can be reduced to the value required by the link (b).
Figure 8. The FCPA strategy increases energy efficiency through combined power allocation and cooperation strategies. Consequently, the CH of each cluster is the first node to exhaust its batteries. Arrows with markers show the routes preferred by the algorithm; arrows with more markers indicate more heavily used routes.
Figure 9. Residual energy and information transmitted for the FP, PA, CP, PAC, and proposed FCPA protocols.
Figure 10. Second scenario: the network adjusts to the presence of the jammer in the new position, so not all clusters present the same structure. The number of isolated nodes is higher in the blue and yellow clusters.
Figure 11. Third scenario: the network adjusts to the jammer's new position by varying the CH in the blue and yellow clusters, increasing the number of isolated nodes in these clusters.
Figure 12. Fourth scenario: the network keeps the same CHs to provide communication in the clusters, but the number of isolated nodes decreases significantly owing to the new position of the jammer.
Figure 13. Fifth scenario: the blue cluster is notably the most affected, with multiple isolated nodes.
Figure 14. Sixth scenario: with the jammer in quadrant EG23, several nodes from the blue cluster are now associated with the CH of the yellow cluster, and one node from the green cluster associates with the CH of the blue cluster.
Figure 15. Seventh scenario: exceptionally, there exists a node that cannot communicate with any neighbor node.
Figure 16. Bar plot of residual energy in J and information transmitted in kb for the scenarios with the jammer moving along the x-axis. A small difference between the bars of the same strategy shows a good tradeoff between the analyzed metrics; however, the information transmitted is the priority owing to the objectives of the WSN.
Figure 17. Bar plot of residual energy and information transmitted for the scenarios with the jammer moving diagonally.
Table 1. Comparison with related work.
Table 2. System parameters used for the simulations.
Table 3. Information transmitted and residual energy (in percentage) for all the scenarios analyzed in this work.
A novel tri-band T-junction impedance-transforming power divider with independent power division ratios

In this paper, a novel L network (LN) is presented, which is composed of a frequency-selected section (FSS) and a middle stub (MS). Based on the proposed LN, a tri-band T-junction power divider (TTPD) with impedance transformation and independent power division ratios is designed. Moreover, the closed-form design theory of the TTPD is derived based on transmission line theory and circuit theory. Finally, a microstrip prototype of the TTPD is simulated, fabricated, and measured. The design is for three arbitrarily chosen frequencies, 1 GHz, 1.6 GHz, and 2.35 GHz, with independent power division ratios of 0.5, 0.7, and 0.9. The measured results show that the fabricated prototype is consistent with the simulation, which demonstrates the effectiveness of the proposed design.

Introduction

In modern wireless communication systems, the ever-increasing demand for high-performance indicators has led to extensive investigation of radio frequency (RF)/microwave devices. Owing to the indispensability of power dividers (PDs) and filters in RF/microwave front-end devices, vast research has been dedicated to PDs [1][2][3][4][5][6][7][8][9] and filters [10][11][12] with various performances over the past decades. In order to achieve multi-band concurrent operation, a number of PDs have been reported, including arbitrary power division [1,2,9], controllable frequency ratio [4,5], and multi-way transmission [1,6]. The combination of multi-way transmission and arbitrary power division is achieved by using a two-section dual-frequency transformer in each way [1]; however, its circuit size is very large. In order to further improve the isolated frequency band, a dual-band Gysel power divider (PD) based on two Schiffman phase shifters is reported in [2]. In addition, dual-band PDs with controllable frequency ratio are designed by using lumped elements [3] and composite right- and left-handed transmission lines [4], respectively. Furthermore, an earlier reported dual-band PD [5] makes a breakthrough in multi-way application with equal power division. However, the aforementioned PDs are only applicable to dual-band operation, and it is hard to extend them to more than two frequencies. Even so, there exist several tri-band PDs [6][7][8]. For example, a tri-band PD [6] is implemented based on a three-section transmission line transformer; however, closed-form formulas for the parameter solution have not been obtained, and extra optimization is usually required. A novel impedance transformer [7] transforms the derived admittances at three frequency points and completes the design of a tri-band PD. Note that the PD reported in [8], using embedded transversal filtering sections, can realize single- or multi-band capability, as demonstrated with a quad-band example. Though multi-band PDs with satisfactory performance have been extensively investigated, independent power division ratios at arbitrary operating frequencies have not been achieved. The main limitation of the frequency-dependent transformer presented in [9] is its dual-frequency operation. To the best of the authors' knowledge, there is little reported research on multi-band PDs with independent power division and an impedance-transforming function.
In this paper, an original TTPD with independent power division ratios at arbitrary frequencies is proposed, and a novel LN with multi-band application is designed and analyzed in detail, which is smaller in size than the PI-type impedance transformer [13]. The LN utilized in the TTPD fulfills the transformation between the equivalent input impedance and the terminal impedance at each operating frequency. Furthermore, the complete design methodology of the TTPD and the formulas for parameter calculation are presented. For theoretical verification, the ideal TTPD is simulated with the ideal electrical parameters calculated from the analytical equations. For experimental verification, a microstrip prototype is fabricated by employing normal printed circuit board (PCB) fabrication technology. The presented TTPD, operating at the center frequencies of 1 GHz, 1.6 GHz, and 2.35 GHz with independent power division ratios of 0.5, 0.7, and 0.9, is designed, simulated, and measured. A good agreement between the simulated and measured results is observed, which demonstrates the effectiveness of this design.

Theoretical design and numerical calculation

As expressed in (1), the equivalent input impedances R_2i and R_3i of the two output paths and the input terminal impedance R_1 satisfy the parallel-circuit relation and the impedance-match condition at each selected frequency f_i (i = 1, 2, 3). Moreover, the power division ratio k_i, which is the ratio of the output powers P_2i and P_3i, is related to R_2i and R_3i through (2) for the proposed TTPD [1]. After rearranging (1) and (2), the equivalent input impedances R_2i and R_3i can be expressed in terms of the input terminal impedance and the power division ratio, as given by (3) and (4). That is, R_2i and R_3i are determined once R_1 and k_i are arbitrarily chosen. As shown in Fig 2, the subscripts S and L uniformly denote the source-end and load-end, while the subscripts a and i denote the specific output path (a = 2, 3) and the specific frequency f_i (i = 1, 2, 3) in the following equations. The LN plays the role of impedance transformation between the equivalent input impedance R_ai (a = 2, 3; i = 1, 2, 3) and the output terminal impedance R_a (a = 2, 3), as shown in Fig 2(A). Moreover, Fig 2(B) and 2(C) provide the schematic of the LN in each output path, including the MS and the FSS. The FSS is expressed by its equivalent input impedance jX_ai (a = 2, 3; i = 1, 2, 3). The MS, a cascaded transmission line, is represented by its ABCD-matrix in (5). Here, Z_a is the characteristic impedance of the MS and E_a is the electrical length of the MS at the specific frequency f_i (i = 1, 2, 3). The equivalent input impedance Z_li in (6) is calculated in the direction from the left end of the MS to the source end, and its conjugate expression is given in (7). Equation (8) enforces the condition of impedance transformation between the left- and right-hand sides of the MS [14]. After rearranging (8) and extracting its real and imaginary parts, the equivalent input impedance jX_ai of the FSS is obtained in (9). Then, substituting (5) into (9), the electrical parameters Z_a and E_a of the MS are constrained by (10). In addition, (10) is derived at each arbitrary specific frequency f_i (i = 1, 2, 3), each with its own independent power division ratio and input port impedance.
Thus, (10) can be expanded into three equations according to its respective conditions at each frequency for tri-band application. Besides, this general equation for multi-band application has restrictive conditions for its solution: the existence of a solution is related to the number of equations and variables. Considering the tri-band design, three expanded equations from (10) are to be solved. As a result, an extra variable parameter should be introduced beyond the two existing variable parameters (Z_a and E_a). In this paper, the output terminal impedance R_a is regarded as the additional variable parameter to solve this problem. In addition, the electrical lengths E_ai (a = 2, 3; i = 1, 2, 3) in the FSS can be calculated by (11), (12), and (13), based on the detailed derivation in [13]. Here, the characteristic impedance of these open-/shorted stubs is taken as Z_0 (= 50 Ω) for ease of calculation. Finally, the design steps of the proposed TTPD are summarized as follows:

1. Determine the three frequency points f_i (i = 1, 2, 3) as 1 GHz, 1.6 GHz, and 2.35 GHz, with the corresponding power division ratios k_i (i = 1, 2, 3) of 0.5, 0.7, and 0.9 in (2).

2. Determine the input port impedance R_1 (= 50 Ω) and calculate the respective equivalent input impedances R_ai (a = 2, 3; i = 1, 2, 3) at each frequency f_i in each output path by (3) and (4). Then solve for the electrical parameters Z_a and E_a (a = 2, 3) of the MS and the additional variable R_a (a = 2, 3) by using the three equations expanded from (10).

According to the above steps, Fig 3(A) shows the layout and the calculated ideal electrical parameters of each stub in output path-2 and path-3. Generally, E_a1 (a = 2, 3) influences all three frequency points f_i (i = 1, 2, 3), E_a2 (a = 2, 3) impacts two frequencies f_i (i = 2, 3), and E_a3 (a = 2, 3) affects only one frequency f_i (i = 3). In Fig 3(B), the differences between the output powers at the corresponding frequencies of 1 GHz, 1.6 GHz, and 2.35 GHz are 6.6 dB, 3.1 dB, and 0.9 dB, respectively. The tri-band performance of the TTPD with these calculated values is clearly demonstrated. Moreover, two comparative examples at different frequencies with different power division ratios are simulated with ideal electrical parameters. Fig 4(A) illustrates a comparative example with approximately the same power division ratios at different frequencies (1.0 GHz, 2.35 GHz, 3.5 GHz), compared with the simulation in Fig 3(B). Next, considering different power division ratios (0 dB, -3.5 dB, -5.8 dB) but keeping the same operating frequencies, another example with its ideal simulated results is shown in Fig 4(B). Fig 3(B) together with Fig 4 proves the feasibility of this design method for arbitrary sets of three frequencies and arbitrary independent power division ratios.

Results

To prove the above theory, the proposed TTPD is simulated, fabricated, and measured to verify its effectiveness. The prototype circuit is built on a Rogers 4350B substrate with a relative permittivity of 3.48, a thickness of 0.762 mm, and a loss tangent of 0.0037. The electromagnetic simulation was done in Advanced Design System (ADS).
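As a quick numerical cross-check of step 2, the short script below evaluates the branch impedances for R_1 = 50 Ω and k_i = 0.5, 0.7, 0.9. The closed forms used here are our reconstruction from the stated parallel-circuit condition, with k_i read as an amplitude ratio (so the output power ratio is k_i²), which is consistent with the simulated output-power differences of roughly 6, 3.1, and 0.9 dB quoted above; the paper's exact equations (3) and (4) may differ in convention:

```python
import math

R1 = 50.0                      # input port impedance, ohms
K = {1: 0.5, 2: 0.7, 3: 0.9}   # power division ratios k_i at f_1, f_2, f_3

for i, k in K.items():
    r = k ** 2                 # output power ratio P_2i / P_3i (assumed)
    R2 = R1 * (1 + r) / r      # reconstructed form of (3)
    R3 = R1 * (1 + r)          # reconstructed form of (4)
    parallel = R2 * R3 / (R2 + R3)          # must recover R1, per (1)
    print(f"f{i}: k={k}  R2i={R2:6.1f}  R3i={R3:5.1f}  "
          f"R2i||R3i={parallel:4.1f}  imbalance={10 * math.log10(r):5.2f} dB")
```

The recovered imbalances (about -6.0, -3.1, and -0.9 dB) track the simulated values in Fig 3(B), and the parallel combination always recovers R_1, which supports this reading of the design relations.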
The electrical parameters of the prototype are given in Fig 3. The prototype was then measured using a vector network analyzer (E5071A). The simulated and measured input reflection coefficients |S_11| are displayed in Fig 6; the three subgraphs in Fig 6 are extracted at the corresponding three operating frequencies of 1 GHz, 1.6 GHz, and 2.35 GHz. Fig 6 illustrates a good agreement between the simulated and measured results. Besides, a tolerable deviation of less than 5% is observed because of manufacturing error, via holes, and the degradation of the substrate and the ordinary SMA (Sub-Miniature version A) connectors. From the measured results, the TTPD operates at the center frequencies of 0.98 GHz, 1.57 GHz, and 2.31 GHz, and the bandwidths for |S_11| ≤ -10 dB are about 163 MHz, 71 MHz, and 71 MHz, respectively. The shift of the center frequencies is due to assembly and fabrication errors and is within the allowable range of 5%. Furthermore, Fig 7 reveals the insertion loss responses of the simulated and measured |S_21| and |S_31|, which are -0.9 dB and -9.3 dB, -2.2 dB and -7.3 dB, and -2.9 dB and -6.0 dB at the three measured frequencies of 1 GHz, 1.6 GHz, and 2.31 GHz, respectively. As shown in Fig 7, the three measured amplitude imbalances |S_21|-|S_31| are 8.4 dB, 5.1 dB, and 3.1 dB, which is consistent with the expectation. However, a few minor deviations from the design goals are observed; the difference between the simulated and measured results is most likely attributed to fabrication and assembly errors. Moreover, the simulated and measured |S_23| are plotted in Fig 8 to show the isolation response of the PD. It can be found that basic isolation is obtained without any added isolation resistor. Table 1 compares the proposed TTPD with previous multi-band PDs and indicates the multi-function design of this work in tri-band operation, impedance transformation, and independent power division ratios.

Discussion

In this paper, a new structure and a complete design method for a TTPD with independent power division ratios at arbitrary frequencies are investigated and demonstrated systematically. The main features of this PD include: 1) arbitrary tri-band application; 2) independent power division ratios; 3) simple calculation equations; 4) easy design procedures; 5) convenient implementation using microstrip lines. It should be noted that the power division ratio and the frequency ratio would be constrained by the processing requirements in practical fabrication; therefore, a proper substrate needs to be selected. For example, thick or thin substrates are more suitable for large or small characteristic impedances, respectively. Under normal conditions, the ideal parameters can be calculated on the basis of the derived formulas. In this paper, the TTPD with independent power division ratios of 0.5, 0.7, and 0.9 is designed at the three frequency points of 1.0 GHz, 1.6 GHz, and 2.35 GHz. The measured 10-dB return-loss bandwidths are about 163 MHz, 71 MHz, and 71 MHz, respectively, and the measured amplitude imbalances of the output ports are 8.4 dB, 5.1 dB, and 3.1 dB at the three measured center frequencies of 0.98 GHz, 1.57 GHz, and 2.31 GHz.
The isolation responses are -10.3 dB, -10.3 dB, and -11.8 dB, respectively, at 1 GHz, 1.6 GHz, and 2.35 GHz, satisfying basic isolation without any added isolation resistor. It can be seen that low return loss, independent power division, tri-band performance, and basic isolation are obtained simultaneously. Additionally, simple calculation, easy design procedures, convenient implementation, and small size make this PD competitive in practical applications.
Extended Maximal Covering Location and Vehicle Routing Problems in Designing Smartphone Waste Collection Channels: A Case Study of Yogyakarta Province, Indonesia

Most people store smartphone waste or give it to others; this is due to inadequate waste collection facilities across the cities/regencies of Indonesia. In Yogyakarta Province, there is no electronic waste collection facility. Therefore, an e-waste collection network is needed to cover all potential e-waste in the province of Yogyakarta. This study aims to design a collection network that provides easy access to facilities for smartphone users, including the number and locations of the collection centers and the routes for transporting smartphone waste to the final disposal site. We propose an extended maximal covering location problem to determine the number and locations of the collection centers. Nearest neighbor and tabu search are used to form the transportation routes: the nearest neighbor is used for the initial solution search, and tabu search is used for the final solution search. The study results indicate that, to cover all potential smartphone waste within a maximum distance of 11.2 km, the number of collection centers that must be established is 30 units, with three pick-up routes. This research is the starting point of the smartphone waste management process; further study is needed on sorting, recycling, repairing, or remanufacturing after the waste has been collected.

Introduction

Developing countries such as Indonesia currently have the problem of handling large volumes of electronic waste (e-waste) [1]. This is associated with rapid technological and economic developments, leading to the production of a wider selection of electronic products at more affordable prices [2], thereby increasing public consumption and the potential for electronic waste. The Global E-waste Monitor 2017: Quantities, Flows, and Resources ranked Indonesia ninth among the global producers of electronic waste, with smartphones observed to contribute significantly. It is, however, important to note that the use of smartphones increased further in 2020 due to the emergence of the coronavirus, which prompted people to work and learn from home using online platforms. Records show smartphones are the technological devices with the highest consumption rate (70%), followed by laptops and personal computers [3], but there is no appropriate waste management process [4]. This is indicated by the absence of regulations for the collection and transportation of electronic waste in Indonesia, with the implemented efforts limited to informal initiatives. This led to the low ranking of the country in the waste management assessment by the United Nations University and is one of the major differences between Indonesia and developed countries [5]. A previous study also showed that improper handling of waste is dangerous for environmental sustainability [6]. About 80% of the materials composing smartphones can be recycled effectively [7]. Smartphones contain valuable materials, such as gold, silver, and palladium [8]. Metals in electronic waste, especially smartphones, are present in higher concentrations than in primary ore found in the ground. As an illustration, 300-350 g of secondary gold can be extracted from one ton of smartphones, while every ton of soil in ordinary gold mines only produces 5 g of primary gold [9]. Resource extraction from e-waste is more economical than extracting metal ores from the ground [10].
Thus, smartphone recycling is done because the economic benefits outweigh the costs [11]. Proper management of e-waste is necessary to reduce the problem of metal scarcity [8]. The potential for smartphone waste in Indonesia is quite significant. The total population of Indonesia in 2020 was 270,203,917 people [12]. If 63.53% are smartphone users [13], then the total number of smartphone users is 171,660,549 people. With an average smartphone lifetime of 4.7 years [14], these users produce 36,523,521 units of smartphone waste per year. When this waste is appropriately managed, in addition to minimizing the environmental impact, it can also provide significant economic benefits by producing 5.48-6.39 tons of secondary gold and saving natural resources. However, so far, the amount of secondary metal recovered through e-waste recycling has been limited [15], due to the limited supply of e-waste. A preliminary study conducted on smartphone users in Indonesia showed that 59% save non-functioning smartphones; 21% dispose of them; and the rest give them to other people, sell them, or use them in other ways. This is because the public does not know what to do with these items. Meanwhile, Yogyakarta is one of the barometer provinces in Indonesia, yet it lacks a proper formal electronic waste management channel. According to previous studies, government drivers are the factor with the most influence on consumers' intentions to participate in smartphone waste collection programs, followed by facility accessibility [16]. This means that the government needs to develop and implement a formal e-waste management system, starting with the e-waste collection process. One of the alternative electronic waste collection programs applicable to Indonesia is the use of a dropbox [17], but Yogyakarta Province does not currently have any collection points for smartphone waste. Therefore, there is a need to provide a convenient collection channel for consumers, which is expected to be a major starting point for a formal waste management channel in the area. This study aimed to design a collection channel by determining the number and locations of the collection center facilities, followed by a transportation route from the collection centers to the final disposal site. Facility location concerns finding a solution that covers customers using a number of facilities. Covering problems are fundamental facility location problems [18], often categorized as location set covering problems (LSCPs) and maximal covering location problems (MCLPs). The classic MCLP involves looking for the locations of several facilities on a network in such a way that the population covered is maximized [19]. Church and ReVelle first introduced this model in 1973 at a North American Regional Science Council meeting [20]. The purpose was to maximize the demand covered within a particular service distance by placing a certain number of distribution facilities [21]. Therefore, customers or clients are declared covered when they are within a certain coverage distance from at least one facility [22]. The model is also important in supply chain decision-making, making it relatively important for practical use [23]. Several previous studies have used MCLP to design models and approaches for determining locations. MCLP is used in both the public and private sectors.
In the public sector, it has been applied to determine the deployment of ambulances in emergency services [24], the location of emergency warning sirens [25], the location of medical equipment supply centers [26], the location of treatment centers in the event of a disease outbreak [27], appropriate locations for shelters for those temporarily displaced by floods [28], and the location of waste cooking oil collection centers [29]. Its use in the private sector includes determining the location of bank branches [30]. Several researchers have extended the MCLP. For example, Davari et al. [31] developed an MCLP with fuzzy travel times; Arana-Jiménez et al. [32] developed a fuzzy MCLP; Vatsa and Jayaswal [33,34] modeled a capacitated multiperiod MCLP with server uncertainty; and Cordeau et al. [9] introduced an MCLP algorithm to determine a subset of facilities maximizing covered customer requests under budget constraints. A continuous MCLP was also developed by Yang et al. [35] to optimize the continuous location of cellular network communication centers for natural disaster rescue. ReVelle et al. [36] solved the MCLP with heuristic concentration, obtaining solutions to large maximum coverage location instances with a high coverage percentage. Ibarra-Rojas et al. [37] developed an MCLP with accessibility indicators for cases where facilities have limited service areas, while Alizadeh and Nishi [38] used a hybrid covering location problem for strategic and tactical decisions. Alizadeh and Nishi [39] also developed a multiperiod maximal covering location problem with different facility configurations as an extension of the classic MCLP. Zhang et al. [40] addressed the issue of locating multimodal facilities in emergency medical rescue. The classical MCLP is used to determine the minimum number of facilities that maximizes the demand covered within a given service distance. The model does not consider costs; it assumes that minimizing the number of facilities also minimizes the investment costs. Because each alternative location is assumed to have the same investment cost, it is necessary to develop a model that considers differences in investment costs between potential locations. In this study, the collection center to be built is an intermediary facility, so it is also necessary to consider transportation costs to the final facility. Therefore, in this study, we develop the MCLP by considering the investment and transportation costs, hereinafter referred to as the extended maximal covering location problem (e-MCLP). With this development, in addition to minimizing the number of facilities, the model also minimizes total costs, including investment costs and transportation costs. Thus, the developed model is expected to provide facility locations affordable to consumers with minimum total investment and transportation costs from the collection centers to the final disposal facility. The selection of the number and locations of the collection centers was followed by the transportation route scheduling plan from the collection centers to the final disposal site to determine the optimal route for efficient product distribution. This is defined as the route with the shortest distance and is considered important due to its ability to reduce transportation costs [41]. The vehicle route optimization problem is known as the vehicle routing problem (VRP), which was introduced by Dantzig and Ramser in 1959 to solve the problem of gasoline distribution [42].
VRP is a common discrete optimization problem in transportation and logistics [43]. A solution generally consists of vehicle routes in which each delivery location is visited exactly once while all routes start and end at the warehouse [44]. VRP focuses on the distribution of goods from the company's depot to customers and aims to minimize global transportation costs related to distance, the fixed costs associated with vehicles and balanced routes, and the number of vehicles required to serve consumers [45]. There are three families of methods for solving VRPs: exact, heuristic, and metaheuristic methods [46]. However, exact methods are not applicable to problems with a large input size and limited time. The methods used in this study are heuristic and metaheuristic. The heuristic method involved the application of the nearest neighbor (NN) method, which has been widely used to solve VRPs. Solomon introduced it in 1987, based on the idea of visiting the closest location from each location visited [47], and it has been observed to give significantly better and more realistic performance in route formation than other methods [48]. This led to its wide application in solving the traveling salesman problem [49], determining routes from one city to another [50], designing waste transportation routes [51], and minimizing travel time and fuel consumption for the transportation of agricultural products [52]. The nearest neighbor method is quite effective in its application due to its ability to select consumers based on the closest distance from the vehicle's last location. It is important to note that the nearest neighbor method produces the route with the shortest distance compared to other heuristic methods [41]. It is also easy to implement and execute, but it does not guarantee the best resulting solution [53], so in this study, the nearest neighbor was used only to determine the initial solution. This research then applied the tabu search (TS) method, an algorithm considered to have the ability to produce an optimal solution. It was first introduced by Glover [54], based on the idea that allowing uphill moves helps to prevent the solution from becoming stuck in local optima [55]. The strength of this method lies in its flexible memory structure [54]; this makes its solutions very similar each time it is applied and makes it better than other methods, such as simulated annealing and genetic algorithms [54]. Several studies have used tabu search to solve VRPs [56], including the classical VRP, periodic VRP, multidepot VRP, site-dependent VRP [57], heterogeneous fleet VRP [58], VRP with discrete split deliveries and pickups [59], multicompartment VRP [60], heterogeneous multitype fleet VRP with time windows and an incompatible loading constraint [61], multidepot open VRP [62], VRP with cross docks and split deliveries [63], VRP with private fleet and common carrier [57], time-dependent VRP with time windows on a road network [64], consistent VRP [65], and heterogeneous VRP on a multigraph [66]. Shi et al. [67] also used a heuristic solution method for the problem of multidepot vehicle-routing-based waste collection and compared the results with tabu search. Khan et al. [68] presented a sustainable closed-loop supply chain framework that uses the metaheuristic approaches of tabu search and simulated annealing. Tebaldi et al. [69] determined the best route to visit a set of customers, considering vehicle capacity and time constraints.
This result underlies the use of the nearest neighbor approach to obtain an initial solution and the use of the metaheuristic tabu search approach to determine the final solution.

Materials and Methods

This research was conducted in two main stages: determining the number and locations of the collection centers and determining the smartphone waste transportation route. The location, number, and capacity of the collection centers were determined by developing a maximal covering location problem, hereinafter referred to as the extended maximal covering location problem (e-MCLP). The focus of the MCLP is to minimize the number of facilities while ensuring all consumers are covered, and the e-MCLP was developed to also consider the costs involved. The model's objective was, therefore, to minimize the total costs, including those associated with investment and transportation from the collection facility to the final disposal site. The costs associated with collecting smartphones are not as high as those for other, larger volumes of e-waste, but the developed model can be used for other types of waste. The reason for choosing this type of waste is that it has a higher economic value than others (containing precious metals such as gold, silver, and palladium), its components allow up to 80% recycling, and the potential volume of smartphone waste is large. Meanwhile, for now, informal actors dominate the practice of recycling smartphone waste, which harms the environment. The low collection cost and the high economic and environmental benefits are expected to motivate the government to implement the proposed scenario. The development scenario involves two levels of collection center (CC) facilities, namely the primary collection center (PCC) and the secondary collection center (SCC). Consumers deposit their waste at a PCC, while local governments carry out the transportation from the PCCs to the SCC. Transportation routes are needed in this study because smartphones are products with small volumes, so the capacity of a collection center is much smaller than the vehicle capacity. If one trip only picks up from one PCC, it becomes inefficient because the vehicle's utility is low, and transportation costs will be higher due to the many trips needed. For this reason, route determination must be considered in this study; routing is expected to increase vehicle utilization and save transportation costs. The output of the transportation route determination is expected to be an input for local governments in scheduling waste collection. Yogyakarta, one of the provinces in Indonesia, is located on Java Island and has an area of 3178.79 km². It has a municipality and four regencies: Yogyakarta city and the Gunung Kidul, Bantul, Sleman, and Kulon Progo regencies, with respective areas of 32.5, 1485.36, 506.85, 574.82, and 579.26 km². These areas contain 14, 18, 17, 17, and 12 districts, respectively [70], as indicated in Appendix A, for a total of 78 districts. These districts were used as candidates for primary collection centers (PCCs) in this study. The parameters used as input in the mathematical model include the distances between the PCCs, the distance expected by consumers, and the distances from the PCCs to the SCC. Yogyakarta Province currently has 3 locations serving as final disposal sites (TPAs). The first is the Regional TPA, commonly called the Piyungan TPA, in Ngablak, Sitimulyo Village, Piyungan District, Bantul Regency. It is an integrated waste disposal site created to serve Yogyakarta City, Bantul Regency, and Sleman Regency [71].
The second location is the Wonosari TPA in Wukirsari, Baleharjo, Wonosari, Gunung Kidul Regency, and the third is the Banyuroto TPA in Dlingo, Banyuroto, Nanggulan, Kulon Progo Regency. The Piyungan TPA has the largest capacity and the most strategic location among the three, which makes it suitable to be used as the secondary collection center (SCC). The candidates for the PCCs are district offices, which means the distances between PCCs are the same as those between district offices, and the distance from a PCC to the SCC is the distance from the district office to the Piyungan TPA. A PCC is provided by the government for consumers in the form of a dropbox, while the SCC is the waste collection point for all the PCCs in a province. For this research, one SCC was located at the final disposal site of the province, while the PCCs were built in the minimum number required to minimize the investment costs incurred while still reaching all consumers. Further, a survey conducted on smartphone users, with a total of 325 valid questionnaires, showed that consumers are willing to bring their smartphone waste to a collection facility within a maximum distance of 11.2 km. The PCCs to be established are therefore selected from among the districts so as to accommodate the interests of the consumers. Meanwhile, to accommodate government interests by minimizing transportation costs, the PCC candidates with the closest distance to the SCC were preferred. The PCC is located in the district office, a government-owned facility, which means it does not require large investment costs: there is no need to procure land and a building, as only the dropbox needs to be prepared. This collection center has the capacity to accommodate the entire smartphone waste supply in its area due to the small product volume. It is important to determine the transportation route to optimize vehicle utility, given the relatively small volume of waste. The locations and capacities of the PCCs were used to determine the transportation routes by joining the nearest neighbor approach and the tabu search model (NN-TS). The application of the NN starts from the depot/SCC and proceeds to the nearest PCC that has not yet been visited, subject to several restrictions. The solution obtained at this stage is limited to determining the best route and the consumers to be served next based on the nearest point to the vehicle's last location [72]. As stated previously, the nearest neighbor algorithm is easy to implement and execute but does not guarantee the best resulting solution [53], which is why it was used in this study only to determine the initial solution. Afterward, the tabu search method was used to search for the optimal route. Metaheuristic methods are usually applied to solve combinatorial optimization problems, where the combinations are used to calculate the number of exchanges to be made in each iteration [73]. The tabu search algorithm is a mathematical optimization method that guides the iterative search for solutions by giving tabu status to solutions already found [74].

Collection Center Determination Steps

The parameters used as input in the mathematical model are the distances between PCC candidates, the distance expected by consumers, and the distances from the PCC candidates to the final disposal site or secondary collection center (SCC). The distances between PCC candidates and from each PCC candidate to the SCC were obtained from Google Maps.
The distance matrices between PCC candidates and from the PCC candidates to the SCC are shown in Appendix B. The distance values are essential to determine the number and locations of the PCCs to be built in the area. The notation used in the mathematical model of the e-MCLP is as follows:

- m: the number of districts (m = 1, 2, ..., |m|)
- k: the number of SCCs (k = 1)
- a_ij: whether the distance requirement between demand point i and candidate PCC j is fulfilled or not
- the capacity of the PCC at point j
- Y_i: coverage of the smartphone waste supply at point i (covered or not)
- the supply of smartphone waste at point i

The basic model was developed from the MCLP [75] into the e-MCLP, and its objective function is to minimize the total cost of the facilities to be established within the range wanted by the consumers, as shown in Equation (1). The costs considered include those associated with the investment and with transportation from the PCCs to the SCC. Furthermore, each PCC is established in a district office, a government facility, which means there is no need to invest in land acquisition; the only investment needed is the procurement of a dropbox, and its value is the same for all candidate locations. The PCC locations selected are therefore those with the lowest investment costs that are closer to the SCC (a solver sketch of this model is given at the end of this subsection). The decision variable X_j has a value of 1 or 0, where 1 indicates that point j is selected as a PCC and 0 indicates that it is not. The dropbox procurement cost is USD 350.37 (USD 1 is equivalent to IDR 14,270.75), and the dropbox service life is 5 years; using the straight-line depreciation method, the annual depreciation cost is USD 70.07 per dropbox. Thus, the investment cost per year is USD 70.07 per dropbox. The vehicle's fuel consumption is 10 km/L at USD 0.67 per liter; therefore, the transportation cost is USD 0.067 per kilometer. Equations (2)-(6) are constraint functions. Equation (2) requires the sum of a_ij X_j over j to be at least 1, meaning that a minimum of one PCC must be established within range of each consumer point. Meanwhile, Equations (3)-(5) state that X_j, a_ij, and Y_i are binary, while Equation (6) states that the capacity required of the PCC at point j is the accumulation of the waste supplies at the points i it covers, each multiplied by 1 or 0, where 1 means the waste supply at point i is covered and 0 means it is not. Smartphones, however, come in small volumes and not too large a supply, given the two-year span assumed in the estimation. Therefore, the PCC capacity value used in this research is 1, which indicates that the entire waste supply is accommodated.

Steps to Determine the Transportation Route

The method used in this research was the nearest neighbor and tabu search (NN-TS) method, where the results obtained from the nearest neighbor were used as input to the tabu search. It is important to note that the tabu search starts by approaching a local minimum while noting recent moves in a tabu list that forms an adaptive memory for exploring better solutions, with its size indicating the degree of diversification and intensification [76].
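Before turning to the routing model, the collection-center selection described above can be written compactly as a 0-1 program. The sketch below is illustrative only: the distances are invented placeholders (the real inputs are the Google Maps matrices of Appendix B), the objective mirrors Equation (1) as we read it (annual dropbox depreciation plus a distance-proportional PCC-to-SCC transport cost), and PuLP with its bundled CBC solver stands in for whatever solver software the authors used:

```python
import pulp

R_MAX = 11.2        # coverage radius consumers accept, km (from the survey)
INV_COST = 70.07    # annual dropbox depreciation per PCC, USD
TR_COST = 0.067     # transport cost per km, USD

# Toy data: 3 demand points, 2 candidate PCC sites.
demand_pts = [0, 1, 2]
candidates = [0, 1]
dist_ij = {(0, 0): 3.0, (0, 1): 15.0,
           (1, 0): 9.0, (1, 1): 4.0,
           (2, 0): 20.0, (2, 1): 6.0}   # demand point -> candidate, km
dist_to_scc = {0: 12.0, 1: 25.0}        # candidate PCC -> SCC, km

# a_ij coverage indicators, per the distance requirement above
a = {(i, j): int(dist_ij[i, j] <= R_MAX)
     for i in demand_pts for j in candidates}

prob = pulp.LpProblem("eMCLP", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", candidates, cat="Binary")

# objective: depreciation plus PCC-to-SCC transport cost per opened PCC
prob += pulp.lpSum((INV_COST + TR_COST * dist_to_scc[j]) * x[j]
                   for j in candidates)

# every demand point must lie within R_MAX of at least one opened PCC
for i in demand_pts:
    prob += pulp.lpSum(a[i, j] * x[j] for j in candidates) >= 1

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("opened PCCs:", [j for j in candidates if x[j].value() == 1])
```

With the real data, only the dictionaries change; the same structure scales to the 78 district candidates.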
Before the route calculations, the mathematical model was formulated based on several assumptions and limitations, which include the following: (1) the vehicle has enough capacity to accommodate the smartphone waste; (2) the distance from location j(a) to j(b) is the same as the distance from location j(b) to j(a) (symmetry); (3) collection activities at the PCCs run from 08:00 to 16:00 WIB with a rest time of 1 h, which means the planning time horizon for a day is 7 h; (4) one vehicle visits more than one PCC, but each PCC is visited by only one vehicle; (5) the average vehicle speed is 45 km/h; (6) the loading time at a PCC is 10 min; (7) the unloading and administration time at the SCC is 30 min. The notation used in the mathematical model of the VRP is as follows:

- V: the set of all vertices {0, 1, 2, ..., v}, with 0 being the SCC
- P: the set of PCCs
- X^t_j(a)j(b)c: whether there is a trip by vehicle c from the PCC at point j(a) to the PCC at point j(b) on trip t or not

The objective function of the VRP mathematical model is to minimize the total distance traveled over the routes, as shown in Equation (7). The decision variable X^t_j(a)j(b)c has a value of 1 or 0; 1 indicates the selected route when vehicle c travels from the PCC at point j(a) to j(b) on trip t, and 0 indicates otherwise. Equations (8)-(15) are constraint functions, with Equations (8) and (9) used to show that a route starts from and returns to the SCC. Equations (10) and (11) state that each PCC is served exactly once on one route. The vehicle's load on a trip is the accumulation of the capacities of the PCCs served, and its maximum capacity is not exceeded, as shown in Equation (12). Because the supply is not large and the product volume is small, the vehicle can carry the entire supply of the smartphone consumers at once. Meanwhile, Equation (13) shows the vehicles going to the SCC to unload. The route completion time is calculated as the vehicle's total travel time plus the service time, i.e., the loading-unloading time, and must not exceed the planning time horizon in a day, which is 7 h, as shown in Equations (14) and (15).

The steps to determine the initial solution using the nearest neighbor method [72] are as follows: a. Select the center point as the starting point of transport, which is the SCC in this study. b. Determine the point with the smallest distance from the SCC and move to that PCC point. c. Take the last point visited as the new starting point and determine the point with the closest distance from it. d. Repeat the process until the vehicle does not have sufficient capacity for transportation; because the vehicle used in this research always has enough capacity, the repetition is instead conducted until the planning time horizon for a day is met but not exceeded. e. Connect these points into a line, called a route, with the working hours used as the constraint to form the freight route.

The tabu search algorithm used in this study is based on [77,78] and includes the following steps: a. Determine the solution representation. This is a sequence of nodes where each appears only once; these nodes represent the PCCs and the SCC. b. Form the initial solution, S. c. Determine the neighborhood solutions. These are alternative solutions obtained by moving nodes, such that each move produces a neighborhood solution; the number of solutions is calculated using Equation (16), N = n(n - 1)/2, where n is the number of PCCs visited in a route. d. Create a tabu list.
This list contains the move attributes previously found; its length increases with the size of the problem and corresponds here to the number of PCCs to be visited. e. Find the best solution, S*. f. Update the tabu list. g. Determine the aspiration criteria. These provide a method of overriding the tabu status. h. Determine the termination criteria. These are applied after all predetermined iterations have been completed. The number of iterations selected is the same as the number of points visited because the maximum number of iterations is the same as the length of the tabu list [79].

Number and Location of Collection Centers

The number and locations of the PCCs were determined using the e-MCLP method, with solver software used to find the optimal solution. The calculations showed that 30 PCCs are to be built, as shown in Figures 1 and 2, with a distribution of 1 unit in Yogyakarta city (Y6), 13 units in Gunung Kidul Regency (G1, G2, G4, G5, G6, G7, G8, G9, G13, G14, G15, G16, and G17), 6 units in Bantul Regency (B4, B10, B12, B13, B15, and B16), 6 units in Sleman Regency (S2, S6, S11, S12, S13, and S17), and 4 units in Kulon Progo Regency (K5, K8, K10, and K12). The selected PCC numbers and locations are shown in Appendix C.

The nearest neighbor method's search for initial solutions started with the 7 h planning horizon, a loading time of 10 min for each PCC, and 30 min of unloading and administration time at the SCC. This was followed by the determination of the depot as the starting location, which is the SCC. The vehicle has the capacity to accommodate the entire PCC supply because the supply is not large and the product volume is small; therefore, only the planning time horizon was considered. The next step was the determination of the PCC with the closest distance, which was found to be the Pleret PCC, at a distance of 4.3 km from the SCC. The distance matrices between the selected PCCs and from the selected PCCs to the SCC are shown in Appendix D. The retrieval process continued to the next PCC when the completion time (CT) was less than or equal to the planning time horizon but was canceled when the completion time exceeded it. Furthermore, the next PCC was determined based on the closest distance, with the initial steps repeated for PCCs not yet served. It is also important to point out that just one type of vehicle was used. The number of trips or tours required to make the collection was calculated to be 3, with a total distance of 659.1 km, a travel time of 14.65 h, and a completion time of 21.16 h, as shown in the sequence presented in Table 1. It was discovered that Route 3 has a longer travel time than the planning time horizon, and it was used as an initial solution in the tabu search method with the expectation that the tabu search would improve it and provide shorter distances and times for the optimal solution.

The tabu search method was applied using the initial solution calculated by the nearest neighbor method. Route 1 was found to be SCC-Pleret PCC-Kotagede PCC-Sewon PCC-Pandak PCC-Bambanglipuro PCC-Sedayu PCC-Dlingo PCC-Playen PCC-Patuk PCC-Ngglipar PCC-Ngawen PCC-SCC. This was followed by the input of the number of elements to be searched, which equals the number of points to be visited, i.e., 11 PCCs. The number of neighborhood solutions was then determined using Equation (16), giving 55 candidate moves.
Furthermore, the tabu list length was discovered to be in line with the number of PCCs to be visited, which was 11 customer locations. This was followed by the maximum number of iterations, which was recorded to be 11 iterations in line with the number of PCCs. These steps were repeated for the other routes, and the determination of the best route produced three routes with a total distance of 602.2 km, a travel time of 13.4 h, and a completion time of 19.89 h. The time for a shipment was found to be 3 days. Furthermore, the best sequences for Routes 1, 2, and 3 had total distances of 178.3, 198.3, and 224.5 km; travel times of 3.98, 4.41, and 5.01 h; and completion times of 6.63, 6.58, and 6.68 h, respectively, as shown in Figure 3 and Table 2. Discussion The results showed that the city/regency with the fewest PCCs is Yogyakarta city due to the short distance between its districts, with the one PCC established in Kota Gede district being found to have the ability to reach 13 other districts. The farthest is the Tegalrejo district, which is 9.5 km away, and this is also considered to be within the distance desired by the consumers. Meanwhile, most of the PCCs were built in Gunung Kidul Regency due to its large area relative to the other cities and regencies, and this caused quite a long distance between the districts. The area is 47% of the total area of Yogyakarta Province, as shown in Figure 2. Therefore, there is a need to build 13 PCCs in the existing 18 districts to cover all consumers, and the remaining 5 will be accessible because they are less than 10 km from the built locations. For example, Playen PCC covers Paliyan District while Semanu PCC covers Ponjong and Karangmojo districts. It is also possible for the waste from Wonosari District to be transported to Playen or Semanu PCC, while Ngawen PCC covers the Semin district. This problem, if solved using MCLP as done by Church and Davis [70], Murray [17], Boonmee et al. [18], and Hartini et al. [26] to minimize the number of collection centers that must be built, results in the same number of PCCs that must be built as with e-MCLP, namely 30 PCCs; the number of PCCs established in each city/regency is the same, but there are several different locations. The comparison of selected CCP locations from each method is shown in Figure 4. The different locations are Kotagede, Semanu, Berbah, and Kalibawang when using the e-MCLP method. When using the MCLP method, the selected locations are Gondomanan, Ponjong, Minggir, and Samigaluh, as shown in Figure 4. The difference between the four locations will have implications for saving transportation costs from PCC to SCC because of the shorter distance, while the investment costs, in this case, are the same for each selected PCC. Comparison of the distance from PCC to SCC between the two methods is shown in Table 3 Table 4 shows that the distance between the selected PCCs and SCC is shorter in e-MCLP than MCLP. This indicates the numbers and locations calculated using the two approaches were able to accommodate the range expected by the consumers, but e-MCLP considered the investment costs and the distance between the PCC and SCC, unlike the MCLP. Therefore, MCLP provided a greater total PCC to SCC distance, which is indicated by 989.6 km with transportation costs of USD 131.76 when the collection is at only one PCC and the total cost required per year is USD 8426.27. 
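For readers unfamiliar with the covering formulation being compared above, the following is a hedged sketch of a set-covering style location model in PuLP; it is not the authors' e-MCLP implementation. It opens the fewest PCCs such that every demand point lies within the coverage radius, with an optional weighted PCC-to-SCC distance term mimicking the e-MCLP idea of also penalising the transportation distance. The candidate list, demand points, the 10 km radius and the weight w are illustrative assumptions.

# Hedged sketch (not the authors' model): a set-covering form of the PCC location
# problem -- open the fewest PCCs so that every demand point is within the coverage
# radius of at least one opened PCC, optionally penalising the PCC-to-SCC distance.
from pulp import LpProblem, LpMinimize, LpVariable, LpBinary, lpSum

def solve_covering(candidates, demands, dist, radius_km=10.0, scc_dist=None, w=0.0):
    """candidates/demands: ids; dist[(i, j)]: km from demand i to candidate j;
    scc_dist[j]: km from candidate j to the SCC (used only if w > 0)."""
    prob = LpProblem("pcc_covering", LpMinimize)
    x = LpVariable.dicts("open", candidates, cat=LpBinary)

    # Objective: number of opened PCCs, plus an optional weighted SCC-distance term.
    cost = lpSum(x[j] for j in candidates)
    if scc_dist is not None and w > 0:
        cost += w * lpSum(scc_dist[j] * x[j] for j in candidates)
    prob += cost

    # Every demand point must be covered by at least one opened PCC within the radius.
    for i in demands:
        prob += lpSum(x[j] for j in candidates if dist[(i, j)] <= radius_km) >= 1

    prob.solve()
    return [j for j in candidates if x[j].value() > 0.5]

In this sketch the plain covering model and the e-MCLP-like variant differ only through the weight w; with w = 0 the objective reduces to the facility count, which is how ties between equally covering candidate sets (such as the four differing locations noted above) can end up resolved in favour of sites farther from the SCC.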
Meanwhile, e-MCLP provided a shorter total distance of 40.9 km with 4.13% savings in transportation costs at USD 261.6 per year. e-MCLP is very suitable for PCCs with large waste volumes because vehicle capacity is filled faster when the volume of waste is large so that there are fewer pick-up points on one route. When there are fewer pick-up points in one route, the more routes there will be, and the development of this method is suitable for implementation. This model's savings in transportation costs will be felt when the number of routes increases because vehicles will depart and return to SCC more often. That is, the closer CCP distance to the SCC is very beneficial for the vehicle. In this study, the selected location does not affect the investment cost because each candidate location requires a procurement cost of the same amount. However, the developed model can accommodate each candidate location requiring a different investment cost. Later, the selected location will provide a minimum total cost, including investment and transportation costs. The best route was determined using the tabu search method to improve the results of the nearest neighbor. This is in line with the opinion found in [77,80,81] that metaheuristics are popular optimization problem-solving techniques to overcome the weaknesses of the heuristic method due to their ability to avoid being trapped in a local optimum solution [82]. The first route was found to be better than the original solution due to its ability to reduce the distance traveled by 15.6 km and the travel time by 0.33 h, thereby reducing the distance and travel time by 8%. Meanwhile, the optimal solution in the second route is the same as the initial solution, but the route completion sequence is reversed such that the first PCC visited using the nearest neighbor was the last in the tabu search method. This shows the nearest neighbor method also has the ability to provide the best solution, and this is in accordance with the findings of [41] that the nearest neighbor method produces the shortest route compared to other heuristic methods. The NN algorithm was able to minimize distribution costs [83] and could easily and quickly resolve problems for several small cities [84]. Furthermore, the initial solution was observed to be infeasible for the third route because the completion time, which was recorded to be 7.6 h, exceeds the planning time horizon, which is 7 h. The continuation of the iteration using the tabu search method changed the initially infeasible solution to feasible as indicated by the shortening of the completion time (CT) to 6.68 h with a total distance (D) of 225.5 km and a travel time (TT) of 5.01 h, saving 8.53% of travel time. This means the tabu search was able to reduce the distance and travel time by 15% as indicated by the 41.3 km and 0.92 h results when compared to the nearest neighbor method, as shown in Table 4. The tabu search method was generally able to provide better performance than the nearest neighbor method. The metaheuristic approach gives better performance results than the heuristic approach [70]. The results showed the possibility of collecting all the smartphone waste in Yogyakarta Province using three routes. This can be completed in a day through the use of three vehicles or in three days through the use of one vehicle. The total distance required to be covered is 602.2 km with a travel time of 13.4 h and a total completion time of 19.89 h. 
This means the tabu search method generally saved 56.9 km (8.6%) distance and 1.25 h (8.5%) travel time. Determination of smartphone waste collection routes in the province of Yogyakarta with one route picking up at several PCC points managed to save a mileage of 346.5 km compared to one route only picking up at one PCC point and a total of 30 pick-up points. If the smartphone waste collection is done once a week, this shorter distance can provide transportation cost savings of USD 2214.39 per year. The area of Yogyakarta Province is only 0.16% of the territory of Indonesia; if this model is implemented nationally, the estimated transportation cost savings will be more than USD 1 million. This research is expected to be the initial framework in formulating e-waste management policies for the national formal channel. If this proposal is successfully implemented in Yogyakarta, it is likely to be implemented in other provinces in Indonesia. The developed model can also be used for other solid waste collection scenarios. The proposed e-MCLP model is very suitable for large e-waste because there is no need to proceed to route determination, considering that the supply from PCC may already meet vehicle capacity. One trip only picks up from a PCC and then returns to the SCC again. However, to use the proposed model, it is necessary to consider whether the community is willing to bring their large size/volume e-waste to the provided PCC. This research is also the first step in electronic waste management, which will then be followed by the next stage of management, which includes separation, repair, recycling, remanufacturing, or disposal. Conclusions Smartphone waste has a high economic value and has great potential. The tendency of people to store and dispose of smartphone waste is due to the absence of waste collection facilities and government regulations that specifically regulate electronic waste management mechanisms. With the public's willingness to bring smartphone waste to a collection point with a maximum reach of 11.2 km and the benefits that will be obtained, this is a challenge and an opportunity for the government to design an optimal collection channel. The design of the collection channel involves consumers as suppliers of electronic waste, primary collection centers (PCCs), and Secondary Collection Centers (SCC). Due to the small area of Yogyakarta Province, 1 SCC is sufficient to accommodate the supply of smartphone waste from all selected PCCs. Based on the results of calculations using e-MCLP, as many as 30 PCCs should be built, with a distribution of 1 PCC in Yogyakarta City, 13 PCCs in Gunung Kidul Regency, 6 PCCs in Bantul Regency, 6 PCCs in Sleman Regency, and 4 PCCs in Kulon Progo Regency. e-MCLP can produce the minimum number of primary collection facilities required to cover all consumers with the shortest distance from secondary collection facilities to minimize total costs, including investment and transportation costs, with a total cost of USD 3617.92 per year. The best transportation route from PCC to SCC was determined using the nearest neighbor and tabu search method (NN-TB). The pick-up route starts and ends at SCC, and the result shows three routes to use in smartphone waste collection. These routes take three days to complete by using one vehicle or one day using three vehicles with a total time required of 19.89 h and a distance of 602.2 km. 
Further research can expand the study of e-waste with a large volume because the large volume will affect the willingness of consumers to bring their e-waste and the need to calculate the capacity of the collection center. This research is expected to be the initial framework in formulating e-waste management policies for a formal national channel. Research can also be continued with the design of management following the collection of e-waste in a final disposal site, such as separation, repair, recycling, remanufacturing, or disposal. Data Availability Statement: The data used to support the findings of this study are available from the corresponding author on request.
9,716
2021-08-09T00:00:00.000
[ "Environmental Science", "Engineering" ]
Evidence that superstructures comprise self-similar coherent motions in high Reynolds number boundary layers We present experimental evidence that the superstructures in turbulent boundary layers comprise smaller, geometrically self-similar coherent motions. The evidence comes from identifying and analysing instantaneous superstructures from large-scale particle image velocimetry datasets acquired at high Reynolds numbers, capable of capturing streamwise elongated motions extending up to 12 times the boundary layer thickness. Given the challenge in identifying the constituent motions of the superstructures based on streamwise velocity signatures, a new approach is adopted that analyses the wall-normal velocity fluctuations within these very long motions, which reveals the constituent motions unambiguously. The conditional streamwise energy spectra of the Reynolds shear stress and the wall-normal fluctuations, corresponding exclusively to the superstructure region, are found to exhibit the well-known distance-from-the-wall scaling in the intermediate-scale range. It suggests that geometrically self-similar motions are the constituent motions of these very-large-scale structures. Investigation of the spatial organization of the wall-normal momentum-carrying eddies, within the superstructures, also lends empirical support to the concatenation hypothesis for the formation of these structures. The association between the superstructures and self-similar motions is reaffirmed on comparing the vertical coherence of the Reynolds-shear-stress-carrying motions, by computing conditionally averaged two-point correlations, which are found to match with the mean correlations. The mean vertical coherence of these motions, investigated for the log region across three decades of Reynolds numbers, exhibits a unique distance-from-the-wall scaling invariant with Reynolds number. The findings support modelling of these dynamically significant motions via data-driven coherent structure-based models. Introduction and motivation Over the past two decades, the study of high Reynolds number (Re τ O(10 4 )) wallbounded flows has become synonymous with very-large-scale motions (VLSMs), also known as 'superstructures', which play a predominant role in the dynamics and spatial organization of wall turbulence.Here, Re τ = δU τ /ν, where δ is the boundary layer thickness, ν is the kinematic viscosity and U τ is the skin-friction velocity, with the latter two used to normalize the statistics in viscous units (indicated by superscript '+').The superstructures can extend beyond 20δ in the streamwise direction (Kim & Adrian 1999;Hutchins & Marusic 2007) and also exhibit 'meandering' when viewed on a wall-parallel plane (de Silva et al. 2015), particularly in the logarithmic region of the flow.Such a large spatial footprint permits these motions to carry significant proportions of the total turbulent kinetic energy and the Reynolds shear stresses of the flow (Liu et al. 2001;Guala et al. 2006;Balakumar & Adrian 2007).Given that the shear stress is responsible for the wall-normal momentum transfer, this suggests that the VLSMs/superstructures also contribute significantly to the high Re τ turbulent skin-friction drag (Deck et al. 
2014).Hence, an improved understanding of the origin of these VLSMs/superstructures, towards which this study is directed, stands to advance our knowledge in both a fundamental and an applied perspective.Hutchins & Marusic (2007) used the terminology 'superstructures' when referring to the spectrogram of the streamwise velocity fluctuations (u) from a high Re τ boundary layer, as shown in figure 1.The spectrogram presents the premultiplied u-energy spectra as a function of the viscous-scaled streamwise wavelengths (λ + x = λ x U τ /ν) and wall-normal distance (z + = zU τ /ν), with λ x = 2π/k x , where k x is the streamwise wavenumber.The high Re τ u-spectrogram is seen to have two prominent peaks.One is located in the inner-region synonymous with the well-documented near-wall cycle (Kline et al. 1967), consisting of high and low-speed viscous-scaled streaks (λ + x ≈ 1000), which are responsible for intense local production of turbulent kinetic energy.The second peak is in the outer region of the flow (typically in the logarithmic/inertial region), and corresponds to the superstructures, which have a spectral signature at very long wavelengths (λ x ∼ 6δ) and also extend down to the wall (Hutchins & Marusic 2007).It is worth noting here that this second peak is only visible for Re τ 2000, owing to the insufficient separation of scales and weaker energy of the superstructures at lower Re τ (Hutchins & Marusic 2007).Between the innerand outer-peaks, a nominal plateau is seen in the spectrogram which corresponds to the distance-from-the-wall (z)-scaled eddies coexisting in the log-region; these eddies make up the increased range of scales with increasing Re τ .In the literature, these intermediate scaled eddies have been described by various structures or motions, including the large-scale motions (LSMs; Kim & Adrian 1999, Adrian et al. 2000), uniform momentum zones (UMZs; Meinhart & Adrian 1995, de Silva et al. 2016), attached eddies (Baars et al. 2017;Marusic & Monty 2019;Hu et al. 2020;Deshpande et al. 2021a) and so forth.In the remainder of this section, for simplicity, we will refer to these motions as LSMs.It should also be noted that the terminology 'VLSMs' and 'superstructures' have been conventionally associated with the very-large-scale motions in internal (Kim & Adrian 1999) and external wall-bounded flows (Hutchins & Marusic 2007), respectively.Considering this study focuses solely on zero-pressure gradient turbulent boundary layers, we henceforth refer to either of these structures simply as superstructures. To date, several studies have investigated the probable mechanisms responsible for the formation of superstructures, with two theories hypothesized most often: (i) the formation of superstructures via concatenation of the LSMs (Kim & Adrian 1999;Adrian et al. 2000;Lee & Sung 2011;Dennis & Nickels 2011), or (ii) the emergence of superstructures due to a linear instability mechanism (Del Alamo & Jimenez 2006;McKeon & Sharma 2010;Hwang & Cossu 2010).The present study does not focus on comparing and contrasting the likelihood of one mechanism over the other.Rather, it builds upon recent compelling evidence in support of the concatenation mechanism (Wu et al. 2012;Baltzer et al. 2013;Lee et al. 2014Lee et al. 
, 2019)), to investigate the characteristics of the constituent motions forming the superstructures.The formation of superstructures via streamwise concatenation of the relatively smaller motions has been confirmed by several studies conducted across all canonical wall-bounded flows (turbulent boundary layers, channels, pipes), through: (i) investigation of the time evolution of instantaneous flow fields (Lee & Sung 2011;Dennis & Nickels 2011;Wu et al. 2012;Lee et al. 2019), (ii) statistical analysis of the superstructure formation frequency/population density (Lee et al. 2014) and (iii) spatial correlations of the low-pass filtered velocity fields (Baltzer et al. 2013;Lee et al. 2019).In comparison, few studies have presented similar statistical arguments in favour of the linear instability mechanism.For instance, Bailey et al. (2008) supported the linear instability argument by noting different spanwise widths of the superstructures and LSMs in the inertial region of a turbulent pipe flow.Their estimates, however, were limited to two-point velocity correlations reconstructed in a particular wallparallel plane, which cannot be uniquely associated with the LSMs responsible for the superstructure formation (Deshpande et al. 2020).Considering that superstructures extend down from the log-region to the wall, Deshpande et al. (2021b) reconstructed two-point velocity correlations across two wall-parallel planes located in the near-wall and the log-region.These statistics, which are purely representative of the large 'wallcoherent' motions, revealed similar spanwise extents of the coexisting superstructures and LSMs for all canonical wall flows, thereby favouring the concatenation argument. Despite substantial support for the concatenation argument, several unanswered questions are still associated with this mechanism.For instance, there is no universal agreement on what facilitates the streamwise concatenation of LSMs to form superstructures.While few studies have associated this with the spanwise alternate positioning of low and high momentum LSMs (Lee et al. 2014), others have conjectured the role played by secondary roll cells (Baltzer et al. 2013;Lee et al. 2019) in favourably organizing the relatively smaller motions.Progress in this regard has been hindered by the lack of understanding of the constituent motions forming the superstructures; for instance, are superstructures purely composed of the inertial δ-scaled motions corresponding to the extreme right end of region II in figure 1? Or do they also comprise of the geometrically self-similar, i.e. z-scaled hierarchy of eddies encompassing the entirety of region II?The present study aims to answer these questions by analyzing the characteristics of the constituent motions. In the past, clarifying such information on the constituent motions has not been possible due to the low to moderate Re τ ( 2000) of the experiments/simulations analyzing the concatenation argument, which severely constricts the extent of region (II) in figure 1.This prevents an unambiguous delineation between the δ-scaled and z-scaled inertial motions coexisting in region II.Further, the statistical signature of the superstructures is also very weak at these Re τ (Hutchins & Marusic 2007), making it challenging to identify and isolate them from the other motions in the flow.However, increased access to high Re τ data over the past decade has substantially increased our knowledge of these inertial eddies coexisting in the log and outer regions (Marusic et al. 
2015; Baidya et al. 2017; Deshpande et al. 2021a). This has also led to growing acceptance of the existence of the geometrically self-similar attached eddy hierarchy in the inertial region (de Silva et al. 2016; Baars et al. 2017; Hwang & Sung 2018; Hu et al. 2020; Deshpande et al. 2020, 2021a), which can be modelled conceptually (Marusic & Monty 2019). These advancements make it compelling to investigate whether these self-similar inertial motions are associated with the formation of superstructures, a conjecture that has previously shown promising results when tested for low Re τ channel flows (Lozano-Durán et al. 2012), and when implemented in coherent structure-based models (Deshpande et al. 2021b). If this conjecture is proven true, then the preferred streamwise alignment of this energy-containing hierarchy of motions (to form superstructures) would have implications for Townsend's attached eddy hypothesis, which otherwise assumes a random distribution of attached eddies in the flow field (Townsend 1976; Marusic & Monty 2019). The investigation can also help resolve the long-standing contradiction (Guala et al. 2006; Balakumar & Adrian 2007; Wu et al. 2012) between: (i) the attached eddy hypothesis, which classifies turbulent superstructures as 'inactive' (Deshpande et al. 2021a), and (ii) instantaneous flow field observations, per which these streamwise elongated motions carry significant Reynolds shear stresses (and hence behave as 'active' motions). To this end, the present study investigates the geometric scalings exhibited by the constituent motions of the superstructures. Experimental data are employed from moderate to high Re τ turbulent boundary layers (2500 ≲ Re τ ≲ 7500), an order of magnitude higher than the simulation studies reported previously, to ensure coexistence of a broad range of inertial scales (region II). The dataset comprises sufficiently resolved large-scale velocity fluctuations acquired in a physically thick boundary layer via unique, large field-of-view (LFOV) particle image velocimetry (PIV), capturing instantaneous flow fields with an extent of 12δ in the streamwise direction (x). In contrast to most studies to date, which have investigated the superstructures by analyzing the large-scale u-fluctuations, here we adopt a unique strategy of investigating the wall-normal (w) velocity fluctuations within the superstructure region. This is because deciphering smaller constituent u-motions from within a larger u-motion can be inconclusive, as can be noted from a sample DNS flow field shown in figures 2(a,b). On the other hand, the w-fluctuations bring out the individual constituent motions more distinctly, which is evident from figure 2(c) and will be analyzed here by computing conditional statistics. It can be noted from figures 2(a-c) that the individual w-eddies within the region associated with a long u-motion are much smaller in streamwise extent (than u), and exhibit a clustered/packed organization plausibly leading to the appearance of a u-superstructure. This scenario is recreated in figures 2(d,e), using an idealized distribution of prograde vortices, which suggests the possibility of strong u- as well as w-correlations extending across large streamwise separations. Such a flow organization, which adds further credibility to the streamwise concatenation hypothesis, will be investigated here via conditional statistics from high Re τ data. Table 1 summarizes the datasets (experimental setup shown in figure 3a); terminology has been defined in §2, and ∆x + and 
∆z + indicate viscous-scaled spatial resolution along x and z directions, respectively. It is important to note that in the present study, any reference to concatenation henceforth refers to the spatial organization of constituent motions over extended streamwise distances, such as in figures 2(d,e).Given the experimental limitations, the study cannot directly comment on the dynamics/mechanism behind how this spatial organization comes into existence.Also, the terminology 'attached eddies' is used here to refer to any eddies/motions scaling with their distance from the wall, and hence is not limited to the eddies physically extending to the wall. Experimental datasets and methodology 2.1.Description of the experimental datasets Five multipoint datasets are used from previously published high Re τ experiments (table 1).Four of these are acquired via two-dimensional (2-D) two-component PIV in the Melbourne wind tunnel (HRNBLWT; Marusic et al. 2015) and span the Re τ range ∼ 2500-14500.The test section of this wind tunnel has a cross-section of 0.92 m × 1.89 m, and has a large streamwise development length of ∼27 m, with maximum possible free-stream speeds (U ∞ ) of up to 45 ms −1 .Such a large-scale facility permits the generation of a sufficiently high Re τ canonical boundary layer flow facilitated by substantial increment in its boundary layer thickness, along its long streamwise fetch.This capability is leveraged in the four PIV datasets employed in the present study, which will be described next. Three of the PIV datasets comprise snapshots of very large streamwise wall-normal flow fields of a turbulent boundary layer (x × z ∼ 12δ × 1.2δ), and are thus henceforth referred to as the large field-of-view (LFOV) PIV datasets (de Silva et al. 2015(de Silva et al. , 2020)).To the best of the authors' knowledge, this is the only published lab-based dataset giving access to sufficient LFOV instantaneous flow fields at Re τ 5000 (to achieve statistical convergence), thereby making the analysis presented in this paper unique as well as ideally-suited for investigating turbulent superstructures.The LFOV is made possible by stitching the imaged flow fields from eight high-resolution 14 bit PCO 4000 PIV cameras, each with a sensor resolution of 4008 × 2672 pixels.Figure 3(a) shows a schematic of the experimental setup for the LFOV PIV, where the region shaded in orange indicates the individual FOVs combined from the eight cameras.These measurements were conducted at the upstream end of the test section, with the LFOV starting at x ≈ 4.5 m from the start of the test section.The experiments were conducted at three free-stream speeds (U ∞ ≈ 10, 20 and 30 ms −1 ), which led to a corresponding variation in Re τ of 2500, 5000 and 7500, respectively.Here, U τ and δ used to estimate the flow Re τ , were computed at the middle of the LFOV, using the method outlined in Chauhan et al. (2009).The boundary layer thickness is nominally δ ≈ 0.11 m for all three Re τ cases. 
Considering the focus of the experiment was on a LFOV, a homogeneous seeding density was ensured across the entire test section of the tunnel for these measurements, and the particles were illuminated by a Big Sky Nd-YAG double pulse laser (∼1 ṁm thickness), delivering 120 mJ/pulse.The last optical mirror to direct this laser sheet was tactically placed within the test section (figure 3a), for ensuring adequate laser illumination levels across the LFOV.This optic arrangement, however, was sufficiently downstream of the PIV flow field and introduced no adverse effects (such as blockage, etc) on the measurement (de Silva et al. 2015).Figures 3(b,c) gives an example of the viscous-scaled u-and w-fluctuations estimated from the LFOV PIV experiment at Re τ ≈ 2500, which successfully captures a turbulent superstructure (of length L x ), as highlighted by a dashed green box in the u-field.Analysis on such a dataset not only avoids uncertainties due to Taylor's hypothesis approximation (Dennis & Nickels 2008;del Álamo & Jiménez 2009;Wu et al. 2012), but also permits identification of these superstructures directly from an instantaneous flow field of a high Re τ boundary layer (where superstructures are statistically significant).The latter represents another unique feature of the present study, and overcomes the limitations experienced by past experimental studies (Liu et al. 2001;Guala et al. 2006;Balakumar & Adrian 2007), which were restricted to isolating superstructure characteristics based on Fourier-filtering, or Proper orthogonal decomposition (POD)-based decomposition of ensemble/time-averaged statistics.The accuracy of these LFOV PIV datasets have been firmly established in appendix 1 ( §5), which compares the premultiplied 1-D spectra obtained from the present data, with those acquired via multiwire anemometry published previously (Morrill-Winter et al. 2015;Baidya et al. 2017).Readers can also refer to the same appendix section for details associated with the computation of the velocity spectra from PIV flow fields, which is relevant to the analysis presented ahead in the paper. The fourth and final PIV dataset comprises of relatively smaller flow fields in the x-z plane (in terms of δ-scaling), and is hence referred to as simply the PIV dataset.This was acquired at U ∞ ≈ 20 ms −1 , close to the downstream end of the test section (x ≈ 21 m from the trip), where δ ≈ 0.3 m, yielding a high Re τ ≈ 14500.The full velocity field captured in this experiment was also made possible by using the same eight PCO 4000 cameras, arranged in two vertical rows of four cameras each, to capture the significantly thicker boundary layer (refer to figures 1-2 of de Silva et al. 2014).This limits the streamwise extent of the flow field to x ∼ 2δ in this case, and is hence not used for identifying the turbulent superstructures in instantaneous fields, but rather used to compute the two-point correlations of u-and w-fluctuations along the z-direction (limited to the inner-region).It is owing to this reason that only a part of the full flow field (x × z ∼ 2δ × 0.4δ), from this dataset, has been considered in the present study.The image pairs from all four PIV datasets were processed via an in-house PIV package developed by the Melbourne group (de Silva et al. 2014), with the final window sizes (∆x + ,∆z + ) used for processing given in table 1. Interested readers may refer to the cited references for further details about the experimental setup and methodology adopted for acquiring these datasets. 
The fifth dataset, which is at the highest Re τ ∼ O(10 6 ), was acquired at the Surface Layer Turbulence and Environmental Science Test (SLTEST) facility in the salt flats of western Utah. The data are acquired from a spanwise and wall-normal array of 18 sonic anemometers (Campbell Scientific CSAT3) arranged in an 'L'-shaped configuration (refer to figure 1 of Hutchins et al. 2012). While the full dataset comprises continuous measurements of all three velocity components as well as the temperature at the SLTEST site over a duration of nine days, here we limit our attention solely to one hour of data associated with near-neutral (i.e. near canonical) atmospheric boundary layer conditions (Hutchins et al. 2012). These conditions were confirmed based on estimation of the Monin-Obukhov similarity parameter, determined by averaging across the 10 sonic anemometers placed along the spanwise array at a fixed distance from the wall (z ≈ 2.14 m). For the present analysis, we are solely interested in the u- and w-fluctuations measured synchronously by the nine sonic anemometers on the wall-normal array, which were placed between 1.42 m ≤ z ≤ 25.69 m with logarithmic spacing. Mean streamwise velocity measurements reported by Hutchins et al. (2012) confirm that all these z-locations fall within the log-region of the atmospheric boundary layer. This data is also used here to compute the two-point correlations of u- and w-fluctuations along the z-direction, for comparison with those obtained from the PIV datasets acquired in the laboratory. Methodology employed to identify and extract turbulent superstructures In the present study, we are interested in computing conditional statistics of the velocity fluctuations associated with the superstructures, identified from the individual flow fields in the LFOV PIV dataset. To identify these structures, we first need to define what we mean by a superstructure, for which we draw inspiration from past studies that have investigated these motions based on 3-D instantaneous flow fields (Hutchins & Marusic 2007; Dennis & Nickels 2011; Lee & Sung 2011). Those studies, as noted by Smits et al. (2011), refer to superstructures as "very long, meandering, features consisting of narrow regions of low-streamwise-momentum fluid flanked by regions of higher-momentum fluid", that "have also been observed in the logarithmic and wake regions of wall flows." Here, for the purpose of analyzing 2-D velocity fields, we define superstructures as very large-scale motions that persist spatially with coherent regions of streamwise velocity, and account for a significant fraction of the streamwise turbulent kinetic energy. Identifying these structures from the PIV field hence requires establishing logical thresholds on the geometric and kinematic properties of the fluctuating u-field (Hwang & Sung 2018; de Silva et al. 2020). For this, we consider previous findings and adopt the following thresholds (a schematic sketch of the resulting identification procedure is given at the end of this subsection): (i) the u-fluctuations within the structure should carry significant streamwise turbulent kinetic energy, judged relative to √(u 2 (z)) (Liu et al. 2001), where √(u 2 (z)) is the root-mean-square of the u-fluctuations at z; (ii) the streamwise extent L x of the structure should exceed a minimum length (set below to 3δ); and (iii) the wall-normal extent should at least span across 2.6 √ Re τ ≲ z + ≲ 0.5Re τ (Guala et al. 2006; Hutchins & Marusic 2007; Balakumar & Adrian 2007; Deshpande et al. 
2021b).In the process of identifying a superstructure, the threshold associated with the streamwise turbulent kinetic energy (i.e.(i)) is considered first before applying thresholds associated with the geometric extent ((ii) and (iii)).With regards to criteria (ii), we acknowledge that past studies investigating 3-D instantaneous flow fields (Hutchins & Marusic 2007;Lee & Sung 2011;Dennis & Nickels 2011) have found superstructures to be as long as 10-20δ.However, statistical analysis based on 1-D one-/two-point correlations (Guala et al. 2006;Hutchins & Marusic 2007;Balakumar & Adrian 2007;Deshpande et al. 2021b) suggests these structures have relatively modest lengths (on average), between 3-6δ.Considering that the present analysis is also limited to 2-D flow fields, we adapt the estimates from past statistical analyses and consider u-structures with streamwise extent, L x > 3δ as superstructures.Figure 3(d) gives an example of a -u superstructure identified and extracted by the algorithm (u| SS ), based on the aforementioned thresholds from the full flow field depicted in figure 3(b) (highlighted by the dashed green box).Streamwise extent/length of the identified structures (L x ) is judged based on the length of a rectangular bounding box (along x) that fully encompasses the identified structure.Our superstructure identification algorithm extracts the rectangular 2-D flow field within this box to conduct further conditional analysis associated with the superstructures.Although the choice of a rectangular box inevitably also brings in some part of the flow not associated with a superstructure, it only forms a minor part (∼20%) of the bounding box, suggesting conditional statistics can be predominantly associated with the superstructures.Interested readers are referred to the supplementary document provided along with this manuscript, which provides a step-by-step description of the superstructure identification and extraction procedure from a 2-D PIV flow field. Besides identifying a superstructure, which is indicated by a dashed green box in figures 3(b,c), the algorithm also identifies a region of same length×height as the green box but not associated with a superstructure (u| noSS ).The u| noSS flow field region is allocated by the algorithm in the same wall-normal range as u| SS , but in a different streamwise location within the PIV image that does not satisfy criteria (i-ii) defined above, thereby ensuring it doesn't overlap with u| SS .This practice of extracting u| noSS , from the same PIV fields used to extract u| SS and of the same size as that of u| SS , is conducted across all three LFOV datasets to form a set of u| noSS and u| SS of equal ensembles.Conditional statistics are computed and compared from both u| SS and u| noSS , with the latter considered to confirm that the trends depicted by the former are not an artefact of aliasing or insufficient ensembling/noise.The superstructure extraction algorithm described above, identified superstructures of both +u and -u signatures, of varying lengths, from the three LFOV PIV datasets.A summary of their streamwise extents is presented in the form of a probability distribution function (pdf ) plot in figure 4. 
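A minimal sketch of how such an identification could be implemented on a 2-D fluctuating u-field is given below; it follows the spirit of criteria (i)-(iii) but is not the authors' algorithm (which is detailed in their supplementary document). The amplitude threshold alpha, the field layout u[z, x] and the grid vectors are illustrative assumptions.

# Hedged sketch (not the authors' algorithm): flag candidate superstructures in a
# 2-D u-fluctuation field u[z, x] using an amplitude threshold relative to the
# local rms (criterion i), a minimum streamwise extent (criterion ii) and a
# minimum wall-normal span (criterion iii). alpha is an assumed threshold factor.
import numpy as np
from scipy import ndimage

def find_superstructures(u, x, z, delta, re_tau, u_tau, nu, alpha=1.0, sign=-1):
    """u: 2-D array of u-fluctuations (z along rows, x along columns). Returns
    bounding-box slices of connected -u (sign=-1) or +u (sign=+1) regions that
    satisfy the three criteria."""
    u_rms = np.sqrt(np.mean(u**2, axis=1, keepdims=True))   # rms at each z
    mask = sign * u > alpha * u_rms                          # criterion (i), assumed form

    labels, _ = ndimage.label(mask)                          # connected regions
    boxes = []
    z_plus = z * u_tau / nu
    z_lo, z_hi = 2.6 * np.sqrt(re_tau), 0.5 * re_tau         # log-region bounds
    for sl in ndimage.find_objects(labels):
        zs, xs = sl
        L_x = x[xs.stop - 1] - x[xs.start]                   # streamwise extent
        spans_log = z_plus[zs.start] <= z_lo and z_plus[zs.stop - 1] >= z_hi
        if L_x > 3.0 * delta and spans_log:                  # criteria (ii) and (iii)
            boxes.append(sl)                                  # rectangular bounding box
    return boxes

The u|noSS counterpart regions described above could then be drawn from the same fields, at streamwise locations of the same size where no such box is found.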
The plot is obtained by sorting the identified u-motions into bins of width 0.5δ (between 3.0:0.5:6.0), based on their respective lengths (L x ). The population associated with each bin is then normalized by the total number of −u and +u superstructures identified by the algorithm (for L x > 3δ), which is then plotted in the figure. It can be noted from the plots that the pdfs do not change significantly with Re τ for structures of lengths L x < 5δ. It is only when L x is increased significantly (> 5δ) that notable differences appear for different Re τ . For example, no −u or +u structures are identified in certain PIV datasets, while in others the probability is low. Further, the logarithmic scaling of the vertical axis of the plots reveals that the population density decreases near exponentially as the criterion (ii) used to identify a superstructure (i.e. the minimum length L x ) is increased. The effect of increasing the minimum streamwise extent of a u-structure to qualify as a superstructure, on the conditionally averaged statistics, has been documented in figure 15 in Appendix 2 (§5). Given that an increase in L x does not change the scaling behaviour, but significantly reduces the convergence of the conditioned statistics (due to fewer ensembles), the choice of L x ≳ 3δ in criterion (ii) discussed above is reinforced. [Figure 5 caption fragment: spectrograms computed from the LFOV PIV dataset at various Re τ . Dash-dotted golden and magenta lines represent the relationships λ x ≈ 2z and λ x ≈ 15z, respectively, following Baidya et al. (2017). (c) Schematic of representative w- and uw-carrying eddies centred at various distances from the wall (z r ) in the log region, with light to dark shading used to suggest an increase in z r . R ww (z/z r ) and R uw (z/z r ) respectively represent the vertical coherence of the w- and uw-carrying eddies centred at z r .] Mean statistics Before investigating the conditionally averaged statistics associated with the superstructures, it is worth revisiting the scaling behaviour of the mean statistics, against which the former are compared. Here, the mean statistics have been obtained by averaging across all 3000 flow fields, considering the entire 12δ-long flow fields in the case of the LFOV PIV datasets. In the present study, since we are primarily interested in the w-velocity behaviour associated with superstructures, we investigate the mean spatial coherence of the w-carrying eddies in the log-region of a high Re τ boundary layer. We look at the spatial coherence in both the streamwise (figure 5) and wall-normal directions (figure 6), for both the w-fluctuations and the Reynolds shear stress (uw). Previous investigations of the vertical coherence have been rare compared to the streamwise coherence, particularly for the log-region of a high Re τ boundary layer, owing to the lack of large-scale PIV experiments of the kind utilized here. This makes the present investigation (figure 6) unique by itself. Figures 5(a,b) depict the iso-contours of the premultiplied spectrograms of the w-velocity and the Reynolds shear stress, respectively, computed from the three LFOV PIV datasets. These are plotted as a function of λ + x and z + . The iso-contours for the w-velocity spectrograms can be seen centred around the linear (z-)scaling indicated by λ x = 2z for all Re τ , which is consistent with previous observations in the literature (Baidya et al. 2017). Similarly, the iso-contours for the Reynolds shear stress spectrograms also follow a linear scaling (λ x = 15z) for all Re τ , again consistent with the literature (Baidya et al. 
2017).This analysis not only validates the spectra estimated from the LFOV PIV, but also assists with the construction of a simplified 2-D conceptual picture of the w-and uw-carrying eddies in the log-region of a high Re τ boundary layer (figure 5c).Here, based on the z-scaling exhibited by the data, the lengths (λ x ) of the w-and uw-carrying eddies have been defined as 2z r and 15z r respectively, where z r represents the distance of the eddy centre from the wall.This scaling confirms the association of these w-and uw-carrying eddies with Townsend's attached eddy hierarchy, according to which attached eddies scale with z r (Townsend 1976;Baidya et al. 2017;Deshpande et al. 2021a). While both these linear scalings, which represent the streamwise coherence of the w-and uw-carrying eddies, are well accepted in the literature, not much is known about the vertical/wall-normal coherence of the same eddying motions at high Re τ .Several previous studies (Comte-Bellot 1963;Tritton 1967;Sabot et al. 1973;Hunt et al. 1987;Liu et al. 2001;Sillero et al. 2014) have investigated their vertical coherence in low Re τ canonical wall flows via traditional two-point correlations, providing interesting insights on their scaling.Here, we are inspired by one such interesting result reported in the seminal work of Hunt et al. (1988), based on high Re τ unstably stratified atmospheric boundary layer data, who found the two-point correlation coefficients given by: to be a function of (z/z r ).Here, z r acts as the reference wall-normal location fixed in the log-region, thereby making R ww (or R uw ) representative of the vertical coherence of the eddy centred at z r .It should be noted here that these correlation functions are different from the conventionally used two-point correlations (which consider normalization by the root-mean-square of velocity at both z and z r ), and hence their values aren't restricted between -1 and 1. 
Equations (3.1), however, are ideally suited for the present study, which tests the self-similarity (i.e.z-scaling) of the vertical coherence of the momentum carrying eddies.We compute these correlations for the four high Re τ boundary layer datasets considered and plot them in figure 6, for various z r restricted to the log-region.It can be clearly observed that R uw curves for varying z r and Re τ collapse over one another (represented by a line in teal colour based on least-squares fit), suggesting Re τ -invariance via z-scaling of the vertical coherence of uw-carrying motions.On the other hand, the collapse in the R ww curves is not as good for the relatively low Re τ cases (< 7500), but certainly gets better for the very high Re τ atmospheric data (figure 6d).This case has a significantly thicker log-region than the boundary layers generated in the lab, suggesting the influence of the wall behind the relatively poor collapse of R ww at low Re τ .Accordingly, the z-scaling of the R ww curves has been represented by the golden lines in figures 6(a-d) (obtained by a least-squares fit), which are consistent with R ww curves in figure 6(d), as well as R ww estimated farthest from the wall (z + r ≈ 0.2Re τ ) in figures 6(a-c).The analytical expressions associated with these golden and teal lines are: (3.2) The fact that R ww and R uw are solely a function of z/z r represents geometric self-similarity in the vertical coherence of the w-and uw-carrying inertial eddies, reaffirming their association with Townsend's attached eddies.The analytical forms in (3.2) can thus be used in data-driven coherent structure-based models (Deshpande et al. 2021b) to simulate high Re τ boundary layers (such as atmospheric surface layers).It is worth noting that the collapse in the R uw and R ww curves, observed in figure 6, does not exist for w-and uw-carrying eddies centred far outside the log-region of the boundary layer (i.e.z r > 0.2δ; not shown here), which may be due to the growing influence of the turbulent/non-turbulent interface in the outer-region (de Silva et al. 2014).Investigations for z r below the log-region, however, were not possible owing to insufficient data points captured by the LFOV PIV. 
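Because the explicit form of (3.1) is not reproduced in the text above, the short sketch below assumes one plausible reading: correlations of the w- and instantaneous uw-signals with their values at the reference height z_r, normalised only by the (co)variance at z_r, which is consistent with the remark that the resulting values are not restricted to ±1. The authors' exact definition may differ, so this is illustrative only.

# Hedged sketch: vertical-coherence correlations of the w- and uw-signals with a
# reference height z_r, normalised only at z_r (one plausible reading of (3.1)).
import numpy as np

def vertical_coherence(u, w, z, z_r_index):
    """u, w: 2-D fluctuation fields [z, x]; returns R_ww(z) and R_uw(z) for the
    chosen reference height index, averaging over the streamwise direction."""
    w_r = w[z_r_index]                      # w at the reference height z_r
    uw = u * w                              # instantaneous Reynolds-shear-stress signal
    uw_r = uw[z_r_index]

    R_ww = np.mean(w * w_r, axis=1) / np.mean(w_r**2)
    R_uw = np.mean(uw * uw_r, axis=1) / np.mean(uw_r**2)
    return z / z[z_r_index], R_ww, R_uw     # plotted against z / z_r in the text

Evaluating such profiles for several z_r within the log region and plotting them against z/z_r is how a collapse of the kind shown in figure 6 would be assessed.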
Conditionally averaged statistics associated with superstructures With the scaling behaviour of the mean statistics established in §3, we progress next towards analyzing the conditionally averaged statistics (spectra and correlations) associated with superstructures. Figure 7 plots the conditionally averaged, premultiplied u-spectra computed from the extracted flow fields associated with superstructures (figure 3d), from the three LFOV PIV datasets. The spectra are plotted for z + ≈ 2.6 √ Re τ and 0.5Re τ , and estimated individually from the extracted flow fields associated with low-momentum (k x φ + uu | −u ss ; in blue) and high-momentum superstructures (k x φ + uu | +u ss ; in red). Also plotted are the conditionally averaged spectra considering both -u ss and +u ss (k x φ + uu | −u ss ,+u ss ; in green), which are compared against the mean u-spectra shown in figures 14(a-c). A noteworthy observation from the conditionally averaged spectra (k x φ + uu | −u ss ,+u ss ) is the enhanced large-scale energy (λ + x ≳ 10 4 ) seen for all three Re τ cases. These enhanced energy levels are due to the significant streamwise turbulent kinetic energy associated with the superstructures, which is captured in the extracted flow fields and averaged across fewer ensembles than those used for obtaining the mean spectra. To confirm that these trends are not an artefact of aliasing or ensembling, figure 8 compares the conditionally averaged spectra associated with superstructures (green boxes in figure 3c) with those not associated with the superstructures (brown boxes in figure 3c). [Figure 7 caption fragment: (c,f) Re τ ≈ 7500. Dashed black lines correspond to the mean spectra obtained by ensembling across 3000 PIV images of the full flow field, while the solid blue and red lines represent conditional spectra computed from the u-flow field extracted based on identification of a -u ss and +u ss , respectively; the spectra in green are computed by ensembling across both -u ss and +u ss .] [Figure 8 caption fragment: the mean spectra estimated from the full flow field (black lines) are ensembled across all 3000 fields, while the conditional spectra correspond to extracted flow fields (∼300) of the same length×height associated (green) and not associated (brown) with the superstructures.] Given that both the conditional spectra are estimated from the same number of extracted flow fields, of the same length×height, the enhanced energy in the largest scales for k x φ + uu | −u ss ,+u ss (compared to k x φ + uu | noSS ) can be unambiguously associated with the turbulent superstructures. These trends give us confidence regarding the efficacy of the superstructure extraction algorithm. Also, they indicate that the scalings observed from the conditionally averaged u- and w-statistics can be associated with the constituent motions of superstructures. This is one of the advantages of analyzing very-large-scale motions based on extraction of instantaneous flow fields (present study), as compared with the much simpler approach of Fourier filtering (past studies). Another interesting observation from the conditional spectra for low- and high-momentum motions, k x φ + uu | −u ss and k x φ + uu | +u ss , is their starkly different behaviour in the lower portion of the log-region (figures 7a-c) and outside of it (figures 7d-e). 
[Figure 10 caption fragment: conditional co-spectra compared with ensemble-averaged co-spectra (k x φ + uw ) at various z + within the log-region.] Similar to that noted for the w-spectra, the z-scaling observed in the ensemble-averaged co-spectra (λ x = 15z; figure 5b) is also noted for k x φ + uw | −u ss ,+u ss , confirming our claim that the self-similar motions coexist within the superstructure region. This comparison between k x φ + uw | −u ss ,+u ss and k x φ + uw also showcases the significance of analyzing the very-large-scale motions by extracting instantaneous flow fields, rather than using pure Fourier filtering. While the latter is simpler to execute, it doesn't present the 'full physical picture' associated with the very-large-scale motions. It is only after extraction of the instantaneous flow fields at high Re τ that the present study can confirm the z-scaling characteristics associated with the constituent motions of the superstructures (k x φ + uw | −u ss ,+u ss ). In figure 10, again, high energy levels can be noted in k x φ + uw | −u ss ,+u ss at very large λ x , the magnitude of which is much greater than the energy levels for k x φ + uw | noSS and k x φ + uw at the same λ x . Further analysis is presented in §4.1 to reaffirm that these peaks do not represent very-large-scale w-motions existing in the physical flow field. While the conditional 1-D spectra bring out the geometric characteristics of the constituent motions along the streamwise direction, the same can be understood for the wall-normal direction by computing the two-point correlations (R ww ; (3.1)) for the extracted flow fields. Figure 11 plots R ww | −u ss ,+u ss , i.e. the two-point correlations computed from the w-fluctuations associated with both -u ss and +u ss , for z r limited to the log-region. These are estimated for all three LFOV PIV datasets and compared with the least-squares fit (given by (3.2)) estimated from the mean statistics (plotted with a golden line). Consistent with our observations based on the mean statistics in figure 6, the collapse in the R ww | −u ss ,+u ss curves is not very good at low Re τ but improves significantly at Re τ ≈ 7500. Interestingly, however, R ww | −u ss ,+u ss curves estimated at all Re τ are close to the empirically obtained least-squares fit. Hence, investigation of the vertical coherence of the w-carrying eddies (associated with superstructures) also indicates that the geometrically self-similar eddies coexist within the superstructure region, in line with interpretations based on figures 9 and 10. While the present study lacks the analysis to investigate the spanwise coherence of the constituent motions, consideration of the present findings in light of the recent knowledge on the log region (Hwang & Sung 2018; Deshpande et al. 2020, 2021a,b) suggests that they likely exhibit self-similar characteristics along the span as well. Notably, Deshpande et al. (2020, 2021b) found that the spanwise extent of the wall-coherent, intermediate-scaled motions (λ x ≲ 4δ) varies self-similarly with respect to their streamwise extent, which directly corresponds to the scale-range associated with the constituent motions of the superstructures. 
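As a rough illustration of how conditional premultiplied streamwise spectra of the kind discussed above can be formed from the extracted rectangular flow fields, the following is a minimal FFT-based sketch; it assumes a uniform grid spacing dx and omits the windowing, detrending and exact normalisation and ensembling conventions of the study (see its appendix 1), so it should be read as schematic rather than as the authors' procedure.

# Hedged sketch: premultiplied 1-D streamwise spectrum k_x * phi(k_x) of a signal
# q(x) at a fixed z, ensemble-averaged over extracted flow fields. Uniform dx is
# assumed; windowing/detrending used in practice are omitted for brevity.
import numpy as np

def premultiplied_spectrum(fields, dx):
    """fields: list of 1-D arrays q(x) (e.g. u, w or u*w at one z), one per
    extracted flow field, all of equal length. Returns (lambda_x, kx*phi)."""
    n = len(fields[0])
    kx = 2 * np.pi * np.fft.rfftfreq(n, d=dx)          # streamwise wavenumbers
    phi = np.zeros_like(kx)
    for q in fields:
        qh = np.fft.rfft(q - np.mean(q))
        # one-sided spectral estimate; exact normalisation conventions omitted here
        phi += (np.abs(qh)**2) * (2 * dx / n)
    phi /= len(fields)
    lam_x = np.full_like(kx, np.inf)
    lam_x[1:] = 2 * np.pi / kx[1:]                     # lambda_x = 2*pi / k_x
    return lam_x, kx * phi                             # premultiplied spectrum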
Physical interpretations and discussions on the conditionally averaged statistics Here, we discuss the physical interpretation of the conditionally-averaged spectra presented in figures 7-10, and how it advances our understanding of the constituent motions forming the turbulent superstructures.Given the geometry of individual w-eddies does not physically conform with the very-large-scale peaks noted in k x φ + ww | −u ss ,+u ss and k x φ + uw | −u ss ,+u ss (discussed previously based on figures 2c,e), these peaks are likely an artefact of the preservation of the covariance tensor, which is a property of the Fourier transform.However, the non-zero correlation between u and w-fluctuations, at large λ x , has often been misinterpreted to be representative of instantaneous w-features physically as long as the superstructures (as also highlighted by Lozano-Durán et al. 2012 andSillero et al. 2014), especially when one analyzes it from the perspective of the structure parameter (∼O(1) for large λ x ).Here, we prove from our analysis that this interpretation is incorrect.If one observes w| ss plotted in figure 3(e), which is conditioned with respect to a −u superstructure, it is clear there are no long and energetic w-features extending beyond 3δ.To the best of the authors' knowledge, energetic w-features of such long streamwise extents have never been noted in instantaneous flow fields, and their absence can also be confirmed from the negligible energy in the 1-D w-spectra plotted in figure 5(a), or in the literature (Baidya et al. 2017).Absence of very-long ( 3δ) w-features also means there are no very-large-scale Reynolds shear stress-carrying motions in the instantaneous flow (Lozano-Durán et al. 2012;Sillero et al. 2014).Such misinterpretations are the source for the long-standing contradictions between the attached eddy hypothesis and past studies (Guala et al. 2006;Balakumar & Adrian 2007;Wu et al. 2012) investigating the Reynolds shear stress co-spectra (refer §1), which we attempt to clarify here. 
To reaffirm that the very-large-scale peaks in k x φ + ww | −u ss ,+u ss do not correspond with very long and energetic w-features in the instantaneous flow, figure 12 analyzes the streamwise extent of w-eddies (L w x ) in the extracted w| SS and w| noSS fields.For this analysis, the same algorithm is deployed to identify and characterize the weddies, as used to identify and extract superstructures in the u-field (refer §2.2 and the supplementary document).Figures 12(a,b) represent the same w| SS and w| noSS fields as in figures 3(e,g), but only consider motions with strong fluctuations (i.e.|w SS |, ).This threshold is based on Dennis & Nickels (2011) and assists with identification and extraction of individual, energetic w-eddies.Figures 12(c,d) present the probability distribution functions of the streamwise extents of the w-eddies identified within w| SS and w| noSS flow fields, extracted across all three PIV datasets.The pdf s confirm that the streamwise extent of w-eddies is limited to 3δ across both w| SS and w| noSS .This wavelength range closely corresponds with the geometrically self-similar hierarchy of eddies exhibiting distance-from-thewall scaling in figure 9, reaffirming the key finding of this study, based on direct analysis of the physical flow field.Although not shown here, a similar analysis on the Reynolds shear stress-carrying eddies also yields the same conclusion, reinforcing our earlier statements on the interpretation of k x φ + uw | −u ss ,+u ss .The analysis also confirms that energetic w-eddies do not physically extend along x, as long as the superstructures (> 3δ), meaning the only possible way of observing a physically long w-feature is when the individual w-eddies align along the x-direction.Indeed, the w| SS flow field indicates a much more closely-packed/clustered organization of the individual w-eddies, compared to w| noSS , in the x-direction (figure 12a,b).However, since the present analysis uses snapshot 2-D PIV data, this study cannot definitively comment on the dynamics associated behind the formation of superstructures.But, the conditional analysis presented in this section does lend empirical support in favour of the formation of superstructures, via streamwise concatenation of the intermediate-scaled eddies (Adrian et al. 2000).Interested readers are referred to the supplementary document, where we have utilized a simplified coherent structurebased model (i.e. the attached eddy model), to demonstrate a statistically plausible scenario of self-similar eddies aligning in x to 'form' a superstructure. The present results are also consistent with the conclusions of Lozano-Durán et al. (2012), who found large-scale Reynolds shear stress-carrying structures to be essentially a concatenation of smaller uw-carrying eddies, having lengths ∼3 times their height.This also clarifies the contradiction in the literature on the 'active'/'inactive' status of the very-large-scale u-motions (i.e.superstructures).Given there are no very-large-scale w-(and consequently uw-) features in the instantaneous flow, the superstructures are indeed inactive as per the definition of Townsend (Deshpande et al. 2021a).Present evidence indicates that the superstructures comprise of several z-scaled w-carrying (i.e.active) motions, which explains the past empirical observations of superstructures carrying significant Reynolds shear stress. 
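The per-eddy streamwise extents used to build the pdfs in figures 12(c,d) can be sketched in the same way as the u-structure identification; since the exact amplitude criterion adopted from Dennis & Nickels (2011) is not reproduced above, the threshold factor beta below is an assumption.

# Hedged sketch: streamwise extents of energetic w-eddies inside an extracted
# field w[z, x]; the amplitude threshold (relative to the local rms) is an
# assumed stand-in for the elided criterion in the text.
import numpy as np
from scipy import ndimage

def w_eddy_lengths(w, x, beta=1.0):
    """Return the streamwise extents L_w_x of connected regions where |w|
    exceeds beta times the local (per-z) rms of w."""
    w_rms = np.sqrt(np.mean(w**2, axis=1, keepdims=True))
    labels, _ = ndimage.label(np.abs(w) > beta * w_rms)
    lengths = []
    for _, xs in ndimage.find_objects(labels):
        lengths.append(x[xs.stop - 1] - x[xs.start])
    return np.array(lengths)   # bin these (e.g. in steps of 0.5*delta) to form the pdf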
Concluding remarks

The present study analyzes large-scale PIV datasets, acquired in moderate to high Reτ turbulent boundary layers (2500 ≲ Reτ ≲ 7500), to investigate the constituent motions of the turbulent superstructures. Considering that superstructures are statistically significant only at Reτ ≳ 2000 (Hutchins & Marusic 2007), the present datasets (providing sufficient scale separation) are ideally suited to identify superstructures and analyze their constituent motions. These unique datasets accurately capture the inertia-dominated instantaneous u- and w-fluctuations across a large streamwise wall-normal plane, extending up to 12δ in the x-direction. This facilitates a comprehensive investigation of the horizontal (via 1-D spectra) as well as vertical coherence (via two-point correlations) of the Reynolds shear stress-carrying eddies coexisting in the log-region, which are responsible for the momentum transfer in a high Reτ boundary layer (Baidya et al. 2017; Deshpande et al. 2021a). The statistics bring out the geometric self-similarity of these energetically significant eddies, which complements the well-established knowledge on the self-similarity exhibited by the wall-parallel velocity components in a canonical flow (Baars et al. 2017; Hwang & Sung 2018; Deshpande et al. 2020). We note that this motivates undertaking similar investigations of the momentum and heat flux in thermally stratified wall-bounded flows at high Reτ (for example, atmospheric boundary layers), which could assist with coherent structure-based modelling of these practically relevant flows. The empirically derived scaling behaviour observed from these mean statistics (spectra and correlations) provides a benchmark for comparing and contrasting with the conditionally averaged statistics associated with the turbulent superstructures. Such conditional statistics are made possible by the large-scale PIV flow fields, which permit identification of the superstructures directly from instantaneous flow fields. These statistics present a more comprehensive picture of the superstructures than the limited information available from the modal decompositions (such as Fourier filtering) often used in past studies. Considering the ambiguity involved in interpreting the smaller constituent motions from a u-flow field, the present study adopts the approach of investigating the w-fluctuations within the superstructure region to understand its constituent motions. Notably, the conditional streamwise w- and uw-spectra exhibit the classical z-scaling (λx = 2z; λx = 15z) in the intermediate scale range (Baidya et al. 2017), clearly suggesting that geometrically self-similar eddies coexist within the superstructure region (represented schematically in figure 13). The same conclusion is demonstrated through the conditional two-point w-correlations along the vertical direction, which also exhibit self-similar scaling similar to that noted for the mean flow. Investigations of this kind are only possible by analyzing instantaneous flow fields, highlighting the uniqueness of the present large-scale high Reτ PIV dataset.
The argument regarding the self-similar motions as the likely constituent motions of the turbulent superstructures is reaffirmed by analyzing the geometry and population of individual w-eddies associated with these very-large-scaled structures. The maximum streamwise extent of the energetic w-eddies was found to be limited to 3δ within the superstructures, similar to that noted outside a superstructure, and conforming to the self-similar hierarchy of scales. The same analysis also revealed the spatial organization of these constituent w-eddies within the superstructures, which is consistent with the streamwise concatenation argument for forming superstructures. This also helps clarify the long-standing contradiction in the literature on the active/inactive behaviour of the superstructures (Guala et al. 2006; Balakumar & Adrian 2007; Wu et al. 2012). Since there are no very-large-scale w- and uw-features in the instantaneous flow, the superstructures are indeed inactive per the definition of Townsend (Deshpande et al. 2021a). However, the study finds that superstructures comprise several z-scaled Reynolds shear stress-carrying (i.e. active) motions (Lozano-Durán et al. 2012), which explains the past empirical observations of these very-large-scaled motions carrying significant Reynolds shear stress.

The present study concludes that superstructures are an assemblage of the attached eddy hierarchy in the streamwise wall-normal plane, hinting at a well-defined spatial organization of the attached eddies. This contradicts the original hypothesis of Townsend (1976), per which attached eddies are randomly distributed in the flow domain, suggesting the need to revisit the hypothesis (this has also been tested based on synthetic flow fields and presented in the supplementary document). The present empirical findings, specifically the Reτ-invariance of the vertical coherence of inertial eddies (Rww, Ruw), can also be used to further improve coherent structure-based models, such as the attached eddy model (Marusic & Monty 2019). This is possible by extending the data-driven approach proposed recently in Deshpande et al. (2021b), by defining the geometry of the representative eddies based on the least-squares fits presented in (3.2). The present findings would also benefit the attached eddy model by acting as empirical evidence for modelling superstructures as clusters of self-similar (attached) eddies organized along the streamwise direction.

Figure 1: Premultiplied spectra of the streamwise velocity (kxφ+uu) plotted against viscous-scaled wavelength (λ+x) and distance from the wall (z+) for a turbulent boundary layer at Reτ ≈ 7300 (Hutchins & Marusic 2007). The symbols marked in the plot (×, etc.) correspond to the 'inner' and 'outer' peaks of the u-spectrogram noted previously in the literature. Regions (I), (II) and (III) are used to indicate spectral signatures of various coherent motions observed in the literature. Region (I) corresponds to the near-wall cycle captured via flow visualization by Prof. S. J. Kline (photo shared by Prof. D. Coles). Region (II) corresponds to the LSMs (conceptual sketch by Adrian et al. 2000), UMZs (particle image velocimetry (PIV) by de Silva et al. 2016) and attached eddies (attached eddy simulations by de Silva et al. 2016). Region (III) corresponds to the VLSMs/superstructures, visualized via time-resolved PIV by Dennis & Nickels (2011).
Figure 2: Colour contours of the instantaneous (a,b) streamwise, u, and (c) wall-normal, w, velocity fluctuations in a boundary layer at Reτ ≈ 2000. This data has been extracted from a particular 3-D time block of the publicly available DNS dataset of Sillero et al. (2013). In (a), u is plotted on a wall-parallel plane at z ≈ 0.05δ, as well as on the cross-planes at x ≈ 2δ and 4δ. (b) and (c) respectively plot the u and w fluctuations in the streamwise wall-normal plane shaded in grey in (a). The dashed black line in (b,c) traces the top part of a long −u ramp-type structure. (d,e) respectively plot an idealized distribution of the u and w flow field induced by multiple prograde vortices (in green) positioned along the ramp (Adrian et al. 2000; de Silva et al. 2016).

Figure 3: Schematic of the experimental setup used to conduct LFOV PIV experiments in the streamwise wall-normal plane (x-z) in the HRNBLWT. Green shading indicates flow illuminated by the laser, while the orange shading indicates the flow field cumulatively captured by the PIV cameras (shown in the background). The dash-dotted black line represents the streamwise evolution of the boundary layer thickness, with δ defined at the centre of the full flow field. (b,c) Instantaneous (b) u+ and (c) w+ fluctuations from the LFOV PIV dataset at Reτ ≈ 2500. The dashed green box in (b,c) identifies a low-momentum turbulent superstructure (−uss) of length Lx based on the superstructure extraction algorithm described in §2.2. (d,e) show an expanded view of the u- and w-fluctuations within −uss, as identified in (b,c), respectively. Alternatively, the dashed brown box in (b,c) represents a flow field of the same length×height as the dashed green box, but not associated with a turbulent superstructure (noSS). (f,g) show an expanded view of the u- and w-fluctuations within the noSS region identified in (b,c), respectively.

Figure 4: Probability density function (pdf) of the lengths of the large and intense (a) low and (b) high streamwise momentum motions detected by the superstructure extraction algorithm in PIV flow fields at various Reτ. Background shading indicates the bin sizes used to estimate the pdf, for which the total number of detected superstructures (i.e. the sum of +uss and −uss) was used for normalization. Empty symbols indicate zero probability for the respective bin.

Figure 5: Iso-contours of the premultiplied streamwise 1-D (a) energy spectra of w-fluctuations and (b) co-spectra of the Reynolds shear stress plotted against z+ and λ+x, computed from the LFOV PIV dataset at various Reτ. Dash-dotted golden and magenta lines represent the relationships λx ≈ 2z and λx ≈ 15z, respectively, following Baidya et al. (2017). (c) Schematic of representative w- and uw-carrying eddies centred at various distances from the wall (zr) in the log region, with light to dark shading used to suggest an increase in zr. Rww(z/zr) and Ruw(z/zr) respectively represent the vertical coherence of the w- and uw-carrying eddies centred at zr.

Figure 6: (a-d) Cross-correlation of w-fluctuations measured at z and zr, normalized by w²(zr), for various zr in the log-region. (e-h) Cross-correlation between u(zr) and w(z), normalized by uw(zr), for the same zr as in (a-d).
(a,b,e,f) are estimated from the LFOV PIV datasets, while (c,g) have been computed from the PIV case of the SLTEST dataset; zr listed in the legend corresponds to the 9th, 8th, 6th and 4th sonic positioned from the ground. The dashed green line corresponds to the linear relationship z/zr, while the dash-dotted golden and teal lines correspond to the least-squares fits R^a_ww and R^a_uw defined in (3.2), respectively.

Figure 7: (a-f) Premultiplied 1-D spectra of the u-fluctuations plotted versus λx/δ at (a-c) z+ ≈ 2.6√Reτ and (d-f) z+ ≈ 0.5Reτ for the LFOV PIV data at Reτ ≈ (a,d) 2500, (b,e) 5000 and (c,f) 7500. Dashed black lines correspond to the mean spectra obtained by ensemble-averaging across 3000 PIV images of the full flow field, while the solid blue and red lines represent conditional spectra computed from the u-flow fields extracted based on identification of a −uss and +uss, respectively. The spectra in green are computed by ensemble-averaging across both −uss and +uss.

Figure 8: Premultiplied 1-D spectra of the u-fluctuations plotted versus λx/δ at z+ ≈ 2.6√Reτ for Reτ ≈ (a) 2500, (b) 5000 and (c) 7500. The mean spectra estimated from the full flow field (black lines) are ensemble-averaged across all 3000 fields, while the conditional spectra correspond to extracted flow fields (∼300) of the same length×height associated (green) and not associated (brown) with the superstructures.

Figure 11: (a-f) Conditionally averaged correlations between w-fluctuations at z and zr, normalized by w²(zr), for various zr. The correlations have been computed from the extracted w-flow fields associated with both −uss and +uss. The dashed black line corresponds to the linear relationship z/zr, while the dash-dotted golden line corresponds to R^a_ww defined in (3.2).

Figure 12: (a,b) Examples of intense w-fluctuations (|wSS|, |wnoSS| > 1.3√(w²(z))) present within flow fields associated (a) with superstructures (w|SS) and (b) not associated with superstructures (w|noSS). The w|SS and w|noSS flow fields used as examples in (a,b) essentially correspond to the extracted fields shown in figures 3(e,g). (c,d) Probability density function (pdf) of the lengths (L^w_x) of intense (c) w|SS and (d) w|noSS motions extracted from the corresponding flow fields at various Reτ. Background shading indicates the bin size used to estimate the pdf, for which the total number of detected w|SS and w|noSS motions was used for normalization. Empty symbols indicate zero probability for the respective bin.

Figure 13: Conceptual representation of the main conclusion of this study: z-scaled eddies are likely the constituent motions forming the turbulent superstructures.

Table 1: Table summarizing details of datasets comprising synchronized measurements of u- and w-fluctuations at various wall-normal locations. Reτ for the various PIV datasets is based on δ estimated at the centre of the flow field (figure
12,609.8
2022-10-12T00:00:00.000
[ "Physics" ]
Names and Naming in Online Thrift Shop Based on Linguistic Anthropological Perspective (Penamaan Toko Barang Bekas Online Berdasarkan Perspektif Linguistik Antropologis)

This study aims to examine names and naming in online thrift shops in Indonesia from linguistic and anthropological perspectives. Names and naming, as essential parts of language, convey the representation of an individual, object, or place.

INTRODUCTION

Names are an important aspect of language. They make it easier for us to identify things and to communicate with other people. Names also help us talk about things better (Valentine, et al. 1991; Cooper, et al. 2017). Names are signs which stand for an object and evoke a certain conception of that object (Noth 1990). Names are also produced by the mind and create certain concepts. Names and naming have become essential aspects of language because they are used to label the things around us (Boonpaisamsatit 2007).

There are several studies of names and naming in distinct disciplines, such as sociology, anthropology, and literature (Bacchielli 2005). This study analyzes names and naming from a linguistic anthropological point of view. That is, the examination considers both the word class and the meaning of names from a linguistic perspective, as well as their social or cultural meaning.

From a linguistic perspective, names are examined in two respects. The first is the word class of names; the second is their meaning. Names are often included in a subclass of nouns, a view that has been accepted by many scholars (Frege 1982; Russell 1905; Boonpaisamsatit 2007). Names also have specific meanings, and this has been analyzed by several scholars. The first perspective is that the meaning of a name has a sense of the word: even though the same name can be used for different objects, it can have a distinctive sense (Frege 1982).

The second point of view is that names have two typical functions in everyday language, referential and vocative (Lyons 1977; 1995). The referential function catches the listener's attention to the presence of the object being named; it also reminds the hearer of the existence of the object being named. The vocative function is used to get the attention of the person being summoned (Lyons 1977).

The third point of view is that there are seven types of meaning (Leech 1974). The first is denotative meaning, or meaning based on the dictionary. The second is connotative meaning, or meaning based on the experience and beliefs of the individual. The third is social meaning, which is based on social circumstances and includes aspects of language variation, such as regional dialect variation, slang, etc. The fourth is affective or emotive meaning, which conveys the personal feelings of the owner in the use of language. The fifth is reflected meaning, which arises when a word has multiple conceptual meanings. The sixth is collective meaning, which consists of an association of words in a certain environment, for instance 'beautiful woman' or 'handsome man'. The seventh is thematic meaning, in which a word carries a certain message delivered by the author. The examples and definitions of each type are based on several sources (Leech 1974; Umagandhi and Vinothini 2017; Yunira, et al. 2019). This study examines the meaning of names from a linguistic point of view.
From an anthropological point of view, names and naming fall into three categories: name and identity; roles of names; and naming and culture (Boonpaisamsatit 2007). A name gives a social identity to its owner; it concerns the identity established in the relation between the name's owner and others (Finke and Sokefeld 2018). The role of a name in society is to indicate social relationships and self-representation (Watzalawik, et al. 2012). From the cultural perspective, naming is not only about labeling something, but is also a cultural process (Thomas and Chwarbaum 2016). This study also examines these categories of names and naming from an anthropological point of view.

There are several phenomena of names and naming reported in different studies, such as giving personal names based on socio-cultural background (Al-Zumor 2009), choosing names based on cultural and identity background (Ngade 2011), selecting names in marketing food products and companies (Anderson 2016), and deciding on restaurant names that describe the norms and values of certain people (Wulansari 2020). These different phenomena of giving names to people, places, etc. help us understand the backgrounds of and reasons why certain names are given. It is therefore essential to acknowledge the meanings of names, because every name has a different background and story behind it.

The previous paragraph explained that selecting specific names for people or places also has a background and story behind it. Selecting names is not only a matter for people or places, but also for small businesses. One type of small business that is quite popular is the second-hand or thrifting business (Han 2013; Staff 2019; Bernstein and Alban 2020). Shopping for second-hand apparel is also increasing, and online second-hand or thrift stores are becoming more and more popular nowadays (Hobbs 2016). There is an increasing number of thrifted-clothes purchases in Indonesia (Herjanto, Scheller-Sampson, and Erickson 2016). Names of thrift shops also come in many variations.

Many studies have focused on consumers' decisions or behavior in buying products from second-hand stores or thrift shops. Their results show that thrift shop brand names influence consumers' decisions to buy products (Mitchell and Montgomery 2010; Wodon, Wodon, and Wodon 2013; Hochtritt 2019). However, few if any studies have focused on thrift shop owners' decisions in selecting their names. This study therefore examines the background and story of names and naming in online thrift shops.

METHOD

This study used a qualitative method along with descriptive analysis. It examined the names and naming decisions in online thrift shops, especially in Indonesia. The method of data collection was purposive sampling; the criterion was Indonesian online thrift shops with more than 500 followers on Instagram.

The informants were chosen based on the complexity and variation of their regions. Fifty out of 300 online thrift shop owners were interviewed in this study. The interviews show that the age range of the respondents is between 17 and 27. The respondents are from Jakarta (36.5%), Bandung (17.3%), Surabaya (13.5%), Bogor (5.8%), Bali (5.8%), and other cities such as Sukabumi, Bekasi, Banyuwangi, Sidoarjo, Pontianak, Karawang, Medan, Yogyakarta, Kediri, etc.
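To make the sampling and categorisation steps described above concrete, the sketch below filters a list of shop records by the more-than-500-follower criterion and tallies them into the five follower categories used in the results. The records and field names are hypothetical; only the follower threshold and the category boundaries come from the study.

```python
# Hypothetical records standing in for the data collected from Instagram.
shops = [
    {"name": "Example Thrift Id", "followers": 12_500, "language": "English"},
    {"name": "Toko Contoh", "followers": 800, "language": "Indonesian"},
]

# Purposive sampling criterion: more than 500 followers.
eligible = [s for s in shops if s["followers"] > 500]

def follower_category(n):
    """Map a follower count to one of the five categories used in the study."""
    if n > 50_000:
        return "> 50,000"
    if n > 10_000:
        return "> 10,000"
    if n >= 5_000:
        return "5,000-10,000"
    if n >= 1_000:
        return "1,000-5,000"
    return "500-1,000"

counts = {}
for shop in eligible:
    category = follower_category(shop["followers"])
    counts[category] = counts.get(category, 0) + 1

for category, n in counts.items():
    print(f"{category}: {n} shops ({100 * n / len(eligible):.0f}%)")
```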
The study collected 300 names of online thrift shops in Indonesia. The writer also interviewed several online thrift shop owners about their decisions to use certain names for their shops. However, only 50 thrift shop owners agreed to be interviewed, so this study uses only those 50 online thrift shop names.

The analysis followed several steps. The first was sorting the thrift shops based on their followers on Instagram. The second was classifying the online thrift shop names as Indonesian names, English names, or mixed names. The third was investigating the meaning of the online thrift shop names. Further steps were examining the owners' backgrounds in choosing their thrift shop names, and determining the relation between the meaning of an online thrift shop name and the background of its owner.

RESULTS AND DISCUSSION

This section presents the results of the study of names and naming of online thrift shops from a linguistic anthropological perspective. Several aspects are discussed: followers on Instagram, the languages of thrift shop names, the meaning of online thrift shop names from a linguistic perspective, and online thrift shop names from an anthropological perspective.

Followers on Instagram

This part presents the online thrift shops' followers on Instagram. The follower counts of the online thrift shops differ, and five categories of followers are used in this study. The first is > 50,000; the second is > 10,000; the third is 5,000-10,000; the fourth is 1,000-5,000; and the fifth is 500-1,000. The percentages of follower counts of the online thrift shops can be seen in Figure 1. Based on Figure 1, the number of shops in the first category (above 50,000 followers) is 1, or 2%. The number in the second category (above 10,000) is 12, or 24%. The number in the third category is 8, or 16%. The number in the fourth category is 22, or 44%. The last category contains 7 shops, or 14%. Figure 1 shows that the most common follower counts among the respondents are above 10,000.

Languages of Online Thrift Shop Names

The next step is classifying the names of the online thrift shops based on their language (English, Indonesian, or other languages). Table 1 provides the classification of the language choices of online thrift shop owners in giving names. Based on Table 1, 37 out of 50 online thrift shops (76%) choose English as the language of the shop name. Six online thrift shops use Indonesian names (12%). A further 8% of the shops combine two languages in their names: one uses Indonesian and English, another Javanese and Indonesian, another Sanskrit and English, and another Japanese and Arabic. The rest of the online thrift shop names use Sanskrit (2%), French (2%), Japanese (2%), and Korean (2%). This means that online thrift shop owners in Indonesia tend to use English when naming their online thrift shops.

Meaning of Online Thrift Shop Names Based on a Linguistic Perspective

This section discusses the seven types of meaning from the linguistic point of view of Leech (1974). The classification of online shop names according to the seven types of meaning is described in Table 2.
There are 2, or 4%, of online thrift shop names related to thematic meaning: Mivelous Thrift and Pretty Thrift Id. Both thrift shop owners explain that their small businesses carry a specific message they want to deliver to people. One owner describes Pretty Thrift as meaning that every girl is pretty without exception. The other owner says that Mirvelous means marvelous; she also wants people to remember the name easily.

In addition, 1, or 2%, of the online thrift shop names involves connotative meaning: Bloomy Things. The owner chose the name inspired by the Korean aesthetic of the spring season and flowers; she has her own imaginary meaning of her online thrift shop name based on another country's seasonal moment.

The rest of the online thrift shop owners use multiple meanings for their small business names. There are 25, or 50%, of online thrift shop names that involve multiple meanings. These 25 online thrift shops are divided according to the combinations of meanings that appear in their names.

The first group comprises online shops associated with reflected and collective meaning. There are 9 online shops in this category: Heptacular, Dyes Id, Thriftagram, Hidtreassure, Raraa Collection, Se Lama, Thrift Sweet, Inn Cloth Id, and Baju Oma Yuk. One example of reflected and collective meaning in an online thrift shop name is Heptacular. Heptacular has two conceptual meanings, because it is an abbreviation of the words hepta and spectacular. Hepta is a Greek word meaning seven, and it relates to the seven children in the owner's family. The owner also describes her family as spectacular. So the reflected meaning of Heptacular is seven children in a spectacular family, while the collective meaning of the word relates to the association of the abbreviated words combined into one.

The second group comprises online thrift shop names related to two distinct meanings. There are 6 such thrift shops: reflected and thematic meaning (We Go Thrifty); connotative and reflected meaning (Thrifty Date and Moon Sibs); denotative and thematic meaning (Undiscovered Gems); connotative and collective meaning (Bunny Surus); as well as denotative and collective meaning (Oldie Goldies).

The third group comprises online thrift shop names that involve three different meanings. There are 5 such thrift shops: denotative, emotive, and collective meaning (Lajak Pakai); denotative, reflected, and thematic meaning (Thrift By Panmae); connotative, reflected, and thematic meaning (City Fun Store); social, reflected, and collective meaning (Muda Moody Sub); as well as reflected, collective, and thematic meaning (Komokun Id).

Next are the online thrift shop names with four meanings. There are 4 such online shops: Craving Summer (connotative, emotive, reflected, and thematic meaning), Wear It With Love (denotative, emotive, reflected, and thematic meaning), Sun Circus (denotative, connotative, reflected, and thematic meaning), and Trace Luck (denotative, reflected, collective, and thematic meaning).
The results show that 86% of the online thrift shop names have meanings. This indicates that the owners of online thrift shops choose their small business names with many reasons and considerations; each online thrift shop name is not just an ordinary name, but has a story behind it.

Table 3 presents three categories of names and naming based on an anthropological perspective. The first is names and identity. This study uses the types of identity described by Finke and Sokefeld (2018), namely individual and collective identity. The difference between individual and collective identity lies in the use of common versus unique words: the uniqueness of a name represents personal identity, whereas collective identity is a process of acknowledging people as members of a larger community. This process also involves understanding oneself as part of a bigger category, and collective identity involves a variety of dimensions, such as social class, language, ethnicity, race, religion, orientation, nationality, etc. (Donahoe, et al. 2009). This study focuses only on personal identity, because collective identity involves many aspects that cannot be seen from one interview.

The second category is the role of the name in society. There are 25 online thrift shop names that have a self-representation role. These 25 names were defined and classified based on the results of the interviews, in which the owners described the meaning of the online thrift shop and the story behind it. Most online thrift shop owners talk about their own role in choosing the name. One owner describes the meaning behind Little Petite Diaries: she has a petite body. Another explains that the name Next Level Space came from the owner's interest in outer-space things, such as planets.

In addition, there are 15 online thrift shops whose names have the role of a relationship with other people. These names carry hopes and prayers, which the owners explained. One owner explains the meaning behind the name Komokun Id: Komokun is an abbreviation of the words komorebi and kun. Komorebi is a Japanese word which means sunshine; kun is an Arabic word, short for kun fayakun, meaning "what is meant to happen will happen." The owner hopes that people will be a sun that rises brightly for the people on earth. Another owner describes the meaning behind City Fun Store: the word city has the same pronunciation as the owner's mother's name (Siti). The owner added the word fun after city to form City Fun, because she wants to give something fun to other people and make them happy.

However, there are seven online thrift shops for which no role of the name could be determined, because the owners did not describe or explain the story behind their name choices in detail.

The third category is name and culture. This study did not find any cultural background behind the choice of online thrift shop names. Thus, the anthropological aspects found in online thrift shop names are the personal identity of the name and the roles of the name in society (self-representation and relationship with other people).
CONCLUSION

The results of the study show that online thrift shop names have meanings behind them, such as denotative, connotative, social, reflected, collective, and thematic meaning. Half of the online thrift shops have names with multiple meanings. This means that the owners of online thrift shops have many considerations and reasons before choosing a suitable name for their small business. Those considerations and reasons are self-representation and social relationships with others, and they represent the owner's self-identity. Online thrift shop names also reveal the identity of the owner as well as the role of those names in society. However, the online thrift shop names do not have a cultural background or story.

Figure 1. Followers Amount of Online Thrift Shop

Table 1. Language Choice of Online Thrift Shop Names

Table 2. Based on Table 2, there are 7, or 14%, of online thrift shop names that do not have a meaning behind them: Different Class Garage, Dabeli Id, Look Look, Tokone Dea, Its Thrift, Sixeetz, and Ma Louloutee. This result means that the owners of these online thrift shops gave names to their small businesses without considering a meaning behind them. There are 11, or 22%, of online thrift shops whose names include denotative meaning: Akara Surih, Save & Shop, Little Petite Diaries, Reuse Able, Its Clothier, Preloved By Devs, Thrift By Zi, Take These Locana Thrift, Glitz Apparel, and Ingin Jual. One of the thrift shops with denotative meaning in its name is Akara Surih. This name comes from Sanskrit and has two parts: akara, which means visual representation, and surih, which means used. So the meaning of Akara Surih is second-hand goods that have an impressive visual representation. There are 4, or 8%, of online thrift shop names that involve reflected meaning: Inside Her Closet, Kita Pakai Lokal, Alpine Stuff, Val Preloved, and Next Level Space. One of the thrift shops that involves reflected meaning is Kita Pakai Lokal. Even though the name relates to "use local products or brands," the owner uses the word local to describe the person who sells the product. So the term local here has a reflected, multi-conceptual meaning: according to the owner, Kita Pakai Lokal means local people who sell imported stuff. This relates to the many local people who sell second-hand or thrift products.
4,177.6
2022-01-10T00:00:00.000
[ "Linguistics" ]
A standardised nomenclature for long non‐coding RNAs

Abstract The HUGO Gene Nomenclature Committee (HGNC) is the sole group with the authority to approve symbols for human genes, including long non-coding RNA (lncRNA) genes. Use of approved symbols ensures that publications and biomedical databases are easily searchable and reduces the risks of confusion that can be caused by using the same symbol to refer to different genes or using many different symbols for the same gene. Here, we describe how the HGNC names lncRNA genes and review the nomenclature of the seven lncRNA genes most mentioned in the scientific literature.

| INTRODUCTION

The HUGO (Human Genome Organisation) Gene Nomenclature Committee (HGNC) is the only group with the official capacity to name human genes. We name protein-coding genes, pseudogenes and non-coding RNA (ncRNA) genes; see our commentary on our latest nomenclature guidelines.1 The naming of lncRNA genes is currently the main focus of our ncRNA naming work, in part due to the large numbers of these genes annotated in the human genome, and in part due to the many papers being published on the lncRNAs encoded by these genes.

LncRNA genes are the only class of human genes, other than protein-coding (pc) genes, where research groups may suggest a symbol based on a function or important characteristic of the gene. The HGNC encourages research groups to contact us prior to publication to ensure that proposed symbols meet with HGNC guidelines.1 Briefly, new human gene symbols should not clash with existing vertebrate gene symbols, commonly used abbreviations, or common English words; symbols should contain only uppercase Latin letters and Arabic numerals; symbols should not contain references to any species; symbols must not be pejorative or offensive. The use of punctuation is avoided, although hyphens may be used in specific cases. Unique symbols have always been important to aid literature searching but are now more necessary than ever with the advent of text mining. HGNC curators search the scientific literature for papers on lncRNA genes; where published symbols do not fulfil HGNC guidelines we contact authors to discuss suitable alternatives. For this reason, we approved the unique symbol CHROMR, "cholesterol induced regulator of metabolism RNA," for the lncRNA gene first published as CHROME,3 and EMSLR, "E2F1 mRNA stabilising lncRNA," for the lncRNA first published as EMS.4 Both CHROME and EMS are poor search terms, and "chrome" is a widely used English word.

The HGNC has been naming lncRNA genes since the early 1990s, but it is within the last decade that this endeavour has taken up a large proportion of our gene naming effort. HGNC-approved lncRNA gene symbols are displayed in relevant biomedical resources such as Ensembl,5 NCBI Gene,6 RNAcentral,7 LNCipedia,8 OMIM9 and GeneCards.10 The HGNC provides a Symbol Report on our website (genenames.org) for each gene with an approved symbol that features links out to these and other relevant biomedical resources; Figure 1 shows an example Symbol Report for XIST. Where there is a mouse ortholog, we provide a link to the relevant page of the Mouse Genome Database.11 Figure 2a demonstrates how rapidly the number of publications has increased with time for the seven most widely published lncRNA genes. We have provided a summary of the nomenclature of each of these seven lncRNAs below. These examples illustrate many of the typical issues we consider while naming genes.
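As a rough illustration of the character-level formatting rules quoted above (only uppercase Latin letters and Arabic numerals, punctuation avoided, hyphens reserved for specific cases), the sketch below checks a proposed symbol against those constraints. This is a hypothetical helper, not an HGNC tool, and it cannot check the other requirements (clashes with existing vertebrate symbols, common abbreviations or English words, species references, or offensiveness), which require lookups against genenames.org data and other resources.

```python
import re

# Characters permitted by the guidelines quoted above: uppercase Latin letters
# and Arabic numerals; one hyphenated suffix is tolerated here because hyphens
# are allowed "in specific cases" (e.g. -AS1 style suffixes), but it is flagged.
SYMBOL_RE = re.compile(r"^[A-Z0-9]+(-[A-Z0-9]+)?$")

def check_symbol_format(symbol: str) -> list:
    """Return a list of character-level problems with a proposed gene symbol."""
    problems = []
    if not SYMBOL_RE.fullmatch(symbol):
        problems.append("use only uppercase Latin letters and Arabic numerals "
                        "(optionally a single hyphenated suffix)")
    if "-" in symbol:
        problems.append("hyphen present: allowed only in specific cases, "
                        "needs manual review")
    return problems

# check_symbol_format("CHROMR")    -> []  (format acceptable)
# check_symbol_format("chrome")    -> ["use only uppercase ..."]
# check_symbol_format("ABCA9-AS1") -> ["hyphen present: ..."]
```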
| XIST The XIST (X inactive specific transcript) gene was first published in 1991 12 and the symbol was approved by the HGNC in the same year. As of April 2022, there were over 1,900 hits in PubMed for the XIST symbol ( Figure 2a) with no other competing gene symbols in general use and no overlapping use of the abbreviation to refer to different concepts. XIST is conserved in eutherians and contains two exons derived from a pseudogene that has a coding ortholog from the LNX (ligand of numb-protein X) family, published as Lnx3, at a conserved position in birds, reptiles and amphibians. However, the majority of XIST exons contain sequence derived from mobile elements that is completely unrelated to the pseudogene. 13,14 XIST is necessary for inactivation of one X chromosome in cells with two copies of this chromosome; please see 15 for a recent review on the mechanisms by which XIST achieves this. Notably, the XIST sequence element known as "Repeat A" that has been shown to be necessary for gene silencing is not located within the pseudogene-derived sequence. 14 | H19 The H19 symbol was approved by the HGNC in April 1994 based on 16 who stated that "Despite the fact that it is transcribed by RNA polymerase II and is spliced and polyadenylated, we suggest that the H19 RNA is not a classical mRNA. Instead, the product of this unusual gene may be an RNA molecule." The H19 symbol is also approved for the mouse and rat orthologs; in all three species this lncRNA gene shows sequence similarity and hosts the microRNA gene MIR675 in an exon. The symbol H19 should be viewed as historical as it does not represent a characteristic or function of the gene; this is an example of a gene symbol that the HGNC will retain as it is supported by the lncRNA community and widely published ( Figure 2a). H19 originates from a paper on mouse fetal-specific hepatic mRNAs and the assumption is that the "H" stood for hepatic although this is not explicitly stated; this paper already commented that H19 is expressed in heart and skeletal muscle. 17 The original HGNC-approved gene name that accompanied the H19 symbol was "H19, imprinted maternally expressed untranslated mRNA" but this has since been updated to "H19 imprinted maternally expressed transcript" because the term mRNA is now used only for genes that produce transcripts which are translated into protein. H19 is expressed in the foetus and placenta; the current approved name reflects the fact that this imprinted gene is expressed from the maternal allele. This is in contrast with the neighbouring protein coding gene IGF2, which is also highly expressed in the placenta but is expressed from the paternal allele. 18 H19 is found in some adult tissues such as skeletal muscle and the adrenal gland, and its dysregulation has been associated with many types of cancer although there are contrasting theories about its involvement in the progression of these cancers. 19 | MEG3 MEG3 (maternally expressed gene 3) is another maternally imprinted lncRNA gene. This gene was originally approved with the symbol GTL2 (gene trap locus 2) based on the identification of the mouse ortholog from the site of a gene trap integration. 20 It was subsequently renamed to MEG3 to be grouped with other maternally imprinted genes using the MEG# root symbol 21 in mouse and human -MEG8 and MEG9 are approved symbols for other lncRNA genes. 
Like H19, MEG3 has been associated with many types of cancer and has been reported to be a tumour suppressor gene via regulation of TP53,22 by separate regulation of RB1,23 and by suppression of angiogenesis.24 Figure 2b shows usage of GTL2 versus MEG3 over time and shows how MEG3 is now the symbol supported by the lncRNA community. The GTL2 symbol has been retained in the MEG3 entry as a "previous symbol," in line with HGNC's normal practice of retaining all previously approved gene symbols.

FIGURE 1 An example Symbol Report for the lncRNA gene XIST from genenames.org. HGNC Symbol Reports present the HGNC-approved gene symbol, gene name, unique HGNC ID and other manually curated data in the top HGNC data section. The "Stable symbol" luggage tag is shown at the top of the report for approved symbols which are unlikely to ever be changed. Further down the report, links to many different biomedical resources are provided. Here, we have highlighted the resources that are particularly relevant to lncRNAs.

FIGURE 2 The number of publications in PubMed for the top seven most highly published lncRNA genes. (a) For each of the seven highly published lncRNA genes, the number of publications has rapidly increased over the last 5 years. (b) For all of the most highly published lncRNA genes, the majority of publications use the current HGNC-approved symbol. The first chart shows how, over time, the number of publications supporting the approved symbol MEG3 has increased compared to the previous symbol GTL2. The second chart shows NEAT1 and its published aliases (VINC, MENbeta, MENepsilon, TncRNA); the usage of NEAT1 far surpasses any of its aliases within the last decade. The third chart compares usage of the approved symbol MALAT1 and its published alias NEAT2; again MALAT1 is highly supported. The other four most highly published lncRNA symbols have negligible numbers of publications that do not use the approved symbol.

| HOTAIR

HOTAIR (HOX transcript antisense RNA), which lies antisense to the protein coding HOXC11 gene, was approved in 2007 based on Reference 25. This lncRNA was initially reported to regulate genes at the HOXD locus. It has since been reported as positively regulating HOXC11 levels in cis and negatively regulating HOXD in trans, perhaps due to a duplicated noncoding element within the HOTAIR gene and HOXD locus.26 This lncRNA has also been associated with many types of cancer.27 HOTAIR has a mouse ortholog named Hotair, and mouse models have been reported with contrasting phenotypes.28 We now have a more systematic way of reporting genes that are antisense to protein coding genes (see the "Systematic protocol" section below), and the symbol "HOTAIR" could be considered somewhat frivolous, which we avoid where possible, but we will retain the HOTAIR symbol due to overwhelming usage.

| NEAT1

Two transcripts produced by the NEAT1 gene were first published as MENbeta and MENepsilon in a paper about the transcript map surrounding the MEN1 locus,29 but these two transcripts were not further characterised at that time. The NEAT1 gene is over 620 kb downstream from the MEN1 gene, with many intervening protein coding genes between these two loci, and it has not been associated with the MEN1 gene functionally, so a symbol linking this gene to MEN1 is not optimal.
A short transcript from the NEAT1 locus was described as "trophoblast noncoding RNA" (TncRNA) 30 but this isoform is not found in the mouse ortholog (Neat1) and "TncRNA" is not unique as it is also used as an abbreviation for both "telomeric ncRNA" and "tiny ncRNA" so would not be a suitable gene symbol. Additionally, the longer isoforms of NEAT1 are widely expressed so nomenclature linking this gene specifically to the trophoblast would be misleading. The symbol NEAT1 was first used in a study that identified large noncoding RNAs displaying nuclear enrichment. 31 The name accompanying the symbol was " nuclear enriched abundant transcript 1," which has been recorded as a gene name alias by the HGNC. The HGNC were contacted in 2009 by a researcher writing a review on this gene who requested that NEAT1 could be approved for the human gene and Neat1 for the mouse ortholog. The HGNC coordinates with the Mouse Genomic Nomenclature Committee wherever possible to approve equivalent nomenclature for mouse and human orthologs. At that time the human and mouse transcripts had been shown to be necessary for the formation of paraspeckles in the nucleus, 32 and therefore the HGNC agreed upon a name that reflected this function and that could be approved alongside the NEAT1 symbol: "nuclear paraspeckle assembly transcript 1." NEAT1 also has the alias VINC (virus inducible non-coding RNA) based on its detection in mouse brains infected with Japanese encephalitis virus or Rabies virus. 33 As can be seen from Figure 2b, the NEAT1 symbol is overwhelmingly supported by the research community over any of its aliases. | MALAT1 MALAT1 (metastasis associated lung adenocarcinoma transcript 1) was first identified in a study to find differences in gene expression between tumours of non-small cell lung cancer that metastasised and those that did not. 34 MALAT1 is located close to NEAT1 in the genome of both human and mice and is highly expressed in both species. MALAT1 is localised to nuclear speckles and hence has been given the alias NEAT2, 31 but unlike NEAT1 it is not required for assembly of paraspeckles. The NEAT2 alias is far less published than MALAT1 (Figure 2b). The MALAT1 locus also produces a small cytoplasmic tRNA-like transcript via tRNA processing ribonucleases known as mascRNA (MALAT1-associated small cytoplasmic RNA). 35 Although not restricted to lung cancers, overexpression of MALAT1 has been associated with metastasis in several different types of cancer, 36 though a smaller number of studies have reported that the lncRNA has a tumour suppressor role in some cancers. As the MALAT1 symbol is very well supported, the HGNC has no plans to change this symbol, but we would consider updating the accompanying descriptive gene name in the future to something more informative, if there is community support to do so. | PVT1 The PVT1 symbol was first used for the mouse ortholog (Pvt1) following its discovery as the major locus for murine plasmacytoma variant translocations. 37 The human ortholog was subsequently found in Burkitt's lymphoma translocations. 38 The HGNC originally approved the gene name "pvt-1 (murine) oncogene homolog" as the descriptive name accompanying the approved PVT1 symbol, but we have since updated this to the simpler name "Pvt1 oncogene," which reflects how this gene is described in many papers. The HGNC no longer references other species in gene names to reduce possible confusion. 
Studies have reported that the PVT1 promoter regulates the MYC gene, and that presence of the PVT1 transcript is not necessary for this function.39 The PVT1 gene hosts several microRNA genes and has widely been reported to be able to compete for binding of microRNAs.40 Because it is a microRNA host locus, it also has the alias symbol MIR1204HG based on the most 5′ miRNA gene in the locus. The PVT1 symbol is highly published and is unique to this gene.

| MORE RECENT EXAMPLES OF lncRNA SYMBOLS APPROVED BASED ON PUBLICATIONS

We hope that many of our more recently approved lncRNA gene symbols will achieve the same level of support as the above symbols in the scientific literature in the future. Recent examples of approved lncRNA gene symbols that reflect the function of the encoded lncRNA include RENO1 for "regulator of early neurogenesis 1,"41 COSMOC for "cell fate and sterol metabolism associated divergent transcript of MOCOS"42 and CPMER for "cytoplasmic mesoderm regulator."43 All of these symbols were agreed with the HGNC prior to publication. We were able to approve the symbol NXTAR post publication,44 but we updated the gene name, with the agreement of the authors, from the published name "next to androgen receptor" to the more functionally informative name "negative expression of androgen receptor regulating lncRNA," which still fits with the NXTAR symbol.

| THE HGNC "STABLE" TAG

As outlined in the HGNC guidelines,1 we are now committed to keeping the symbols of clinically relevant genes as stable as possible, and minimising changes to well-published gene symbols. In the era of clinical genomics, it is impossible to contact all clinicians, patient groups, charities and interested individuals to inform them of symbol changes, so it is important that the symbols of genes referred to in the clinic are kept as stable as possible. HGNC curators are currently working through a list of clinically relevant genes and adding a "stable" tag onto the Symbol Reports for these genes once curators are satisfied that the approved symbols are appropriate and are unlikely to be changed (see the top of the XIST Symbol Report shown in Figure 1). We have added this tag to over 40 non-coding RNA genes to date, including the two clinically relevant lncRNA genes, MIR17HG and PCA3. MIR17HG has been associated with Feingold syndrome type 2 as shown in the GenCC (Gene Curation Coalition)45 database, while there is now a clinical test that evaluates levels of PCA3 RNA to help assess prostate cancer risk.46 We have also added the stable tag to the seven highly published lncRNA genes described above, as we have no plans to change these symbols.

| SYSTEMATIC PROTOCOL FOR NAMING ANNOTATED HUMAN lncRNA GENES

In addition to approving lncRNA symbols based on published data, the HGNC has a systematic protocol for naming lncRNA genes that have been manually annotated by the RefSeq annotators at the National Center for Biotechnology Information (NCBI)6 and/or the GENCODE annotators at Ensembl.5 Note that the HGNC has a large set of unnamed lncRNA genes to work through; we currently prioritise genes that are mentioned in publications but have no suitable information for a non-systematic symbol, and lncRNA genes that have been annotated by both of the above-mentioned manual annotation projects. The eight categories, along with the non-systematic category based on published data described above, used for this systematic naming are shown in Figure 3. Please also see the decision-making chart published as fig. 1 in Reference 1, and a more detailed description of each lncRNA naming category in Reference 2.
The eight systematic categories of lncRNA genes are as follows:

• if an lncRNA gene hosts a microRNA gene in an exon or intron it is named as a microRNA non-coding host gene with the symbol format [microRNA symbol]HG, for example, MIR7-3HG
• if an lncRNA gene hosts a small nucleolar (sno)RNA gene it is named as a small nucleolar RNA non-coding host gene with the root symbol SNHG, for example, SNHG3
• if an lncRNA gene is intergenic with respect to protein-coding genes it is named as a long intergenic non-protein coding RNA with the root symbol LINC followed by a unique five digit number, for example, LINC02998
• if an lncRNA gene overlaps the genomic span of a pc gene but is located on the opposite strand compared to that pc gene it is named as an antisense RNA with the symbol format [pc symbol]-AS suffixed with a unique number, for example, ABCA9-AS1
• if an lncRNA gene overlaps at least one exon of a pc gene on the same strand, it is named as an overlapping transcript with the symbol format [pc symbol]-OT suffixed with a unique number, for example, PCBP2-OT1
• if an lncRNA is contained within an intron of a pc gene it is named as an intronic transcript with the symbol format [pc symbol]-IT suffixed with a unique number, for example, HAO2-IT1
• if an lncRNA gene shares a bidirectional promoter with a pc gene it is named as a divergent transcript with the symbol format [pc symbol]-DT, for example, CIBAR1-DT
• if an lncRNA gene has another lncRNA paralog in the human genome, these paralogs may be named with the FAM root symbol (family with sequence similarity), for example, FAM182A and FAM182B. Note that the FAM root symbol is also used for pc genes, but these can be distinguished via locus type.

Although the above protocol is applied where no other suitable information is available at the time of naming, these symbols can become well-established in the literature and so may not necessarily be updated when further data are published, unless there is agreement between research groups working on the genes to do so. Where there is an ortholog in other species, the HGNC may pursue a rename in order that the orthologs be approved with the same symbol and name. For example, the human lncRNA gene DUBR (DPPA2 upstream binding RNA) had the previous symbol LINC00883, while the mouse gene Dubr had the previous symbol 5330426P16Rik.

| A CAUTIONARY NOTE ON THE IMPORTANCE OF APPROVED GENE NOMENCLATURE

During our literature searches for papers on lncRNA genes, HGNC curators have noticed that many papers continue to use names based on BAC clones in the human genome assembly, which were used in previous versions of the Ensembl website as symbols, or primary identifiers, for lncRNAs. These clone-based identifiers used to be displayed on Ensembl gene reports for human genes that had no HGNC symbol, but have now been removed completely and are not searchable in the current version of the Ensembl website. We recently found

| PROTEIN CODING GENES THAT WERE PREVIOUSLY ANNOTATED AS lncRNA GENES

It may be surprising to consider that most lncRNA genes contain open reading frames (ORFs), but these are usually short in length, unsupported by conservation in other species, lack structural features such as protein domains, and are not supported by peptides from mass spectrometry.
Post-annotation experimental evidence may show that such ORFs are translated, and therefore the locus types of lncRNA genes may be updated to protein coding. The following genes were updated based on published data: MTLN (mitoregulin)51 has the previous symbol LINC00116; GREP1 (glycine rich extracellular protein 1)52 was previously LINC00514; NBDY (negative regulator of P-body association)53 was previously LINC01420. Although the HGNC will usually rename such genes, particularly if a new symbol is proposed by authors, in some cases we may retain the gene symbol and only update the gene name. This is the case for the gene TINCR, as this is a well-published symbol that has been retained, while the locus is now annotated as protein coding. The gene name is now "TINCR ubiquitin domain containing" in place of the previous gene name "tissue differentiation-inducing non-protein coding RNA." The TINCR symbol is also still used in papers discussing the protein.54,55 Note that there are still many recent papers describing TINCR as an lncRNA; it is possible that this gene has both coding and non-coding isoforms, but this is true for many protein-coding genes and merits discussion. The HGNC does not approve separate symbols for non-coding isoforms of protein-coding genes; for example, ECRAR (endogenous cardiac regeneration-associated regulator)56 is listed as an alias of the protein coding PTTG1 gene because ECRAR represents a non-coding variant.

| GROUPING TRANSCRIPTS TOGETHER AS lncRNA GENES

For protein-coding genes the presence of ORFs provides information to gene annotators on when a set of overlapping transcripts should be grouped into the same gene or split into different genes. There is no equivalent information for lncRNA genes, which means that criteria need to be agreed upon between different annotation groups as to when transcripts should be grouped together as an lncRNA gene and when they should not. The HGNC plans to host a workshop on this subject with annotation groups and selected lncRNA researchers to decide upon guidelines for this issue. We hope that this will result in consistent grouping of transcripts into lncRNA gene models in the future.

| CONCLUSION

The field of lncRNA research continues to grow rapidly each year. Consistent use of approved gene symbols for lncRNA genes will mean that all research papers and associated online resources are easily searchable for lncRNAs. We encourage researchers publishing on new lncRNA genes to contact the HGNC prior to submission. This will enable HGNC curators to check that the proposed symbol follows our guidelines and will prevent changes to gene symbols post publication. HGNC-approved symbols appear on our website, www.genenames.org, and in many key lncRNA resources.

ACKNOWLEDGEMENTS

We thank all members of the HGNC for their helpful discussions on the naming of lncRNA genes and particularly the HGNC alumnus, Dr Matt Wright, for all of his hard work on lncRNAs. The HGNC is funded by Wellcome Trust grant 208349/Z/17/Z and the National Human Genome Research Institute (NHGRI) grant U24HG003345. All authors have read and approved the final manuscript. The contents of this paper are solely the responsibility of the authors and do not necessarily represent the official views of the National Institutes of Health.

CONFLICT OF INTEREST

The authors have no conflicts of interest to report.
5,541.2
2022-07-26T00:00:00.000
[ "Biology" ]
Plasmonic hot electrons for sensing, photodetection, and solar energy applications: A perspective

In plasmonic metals, surface plasmon resonance decays and generates hot electrons and hot holes through non-radiative Landau damping. These hot carriers are highly energetic, and their properties can be modulated by the plasmonic material, size, shape, and surrounding dielectric medium. A plasmonic metal nanostructure, which can absorb incident light in an extended spectral range and transfer the absorbed light energy to adjacent molecules or semiconductors, functions as a "plasmonic photosensitizer." This article deals with the generation, emission, transfer, and energetics of plasmonic hot carriers. It also describes the mechanisms of hot electron transfer from the plasmonic metal to surface adsorbates or to adjacent semiconductors. In addition, this article highlights the applications of plasmonic hot electrons in photodetectors, photocatalysts, photoelectrochemical cells, photovoltaics, biosensors, and chemical sensors. It discusses the applications and the design principles of plasmonic materials and devices.

I. INTRODUCTION

The term "hot carriers" defines charge carriers (electrons and holes) in a non-equilibrium state with larger energy than in the thermal equilibrium state. Generally, high temperature, light (typically ultraviolet light), and high electric fields excite or extract hot carriers. In contrast, surface plasmon resonance (SPR) is a unique route to excite hot carriers. In properly designed nanostructures of metals (typically Au, Ag, and Cu), free electrons will oscillate collectively if the incident light matches the resonant frequency of the collective electrons; this is known as SPR, which includes localized surface plasmon resonance (LSPR) and surface plasmon polaritons (SPPs). The collective oscillation, which is called a plasmon, dephases quickly, releasing the energy stored in the plasmon through (i) far-field light scattering (radiatively), (ii) near-field electromagnetic field enhancement (non-radiatively), (iii) hot carrier generation, and (iv) plasmonic heat effects. The branching ratio of these processes depends on the size, shape, and local media surrounding the plasmonic metal nanostructures. Each of these energy transfer processes can be used for a specific application. Some review articles have dealt with plasmonic applications in enhanced light trapping from the far-field radiation effect,1,2 surface-enhanced Raman scattering (SERS) or enhanced infrared absorption from the near-field enhancement effect,3,4 water steam generation (seawater desalination), cancer therapy from the plasmonic heat effect,5–7 photodetection,8 photovoltaics,9 photocatalysis, and photochemistry.10–13 This review article attempts to present the perspective and the correlations of three application fields of plasmonic hot electrons—sensing, photodetection, and solar energy conversion.

A. Generation and timescale of hot carriers in metallic nanostructures

For an isolated metal nanostructure under light illumination, excitation of SPR results in light collection from an area much larger than its geometrical cross-sectional area [Fig. 1(a)].19 Subsequently, the plasmon (coherent electron oscillations) can dephase non-radiatively through Landau damping, generating hot electron-hole pairs at the timescale of 1–100 fs [Fig. 1(b)].20,21
The electrons from the occupied energy levels are excited above the Fermi energy, reaching energies up to E_F + ħω_LSPR, where E_F and ħω_LSPR denote the Fermi level and the LSPR energy, respectively. These highly energetic hot electrons quickly transfer energy to the electrons in thermal equilibrium, driving the electron gas out of equilibrium, while the generated hot electrons redistribute their energy to lower-energy electrons. 22,23 In other words, hot electrons multiply via electron-electron scattering, at the cost of the energy of the initial hot electrons. 24 The percentage and population of hot carriers among all charge carriers depend on the electronic structure of the metal nanostructure and on the incident photon energy. 25,26 On the timescale of 100 fs to 1 ps, the hot carriers are finally thermalized to a Fermi-Dirac-like distribution [Fig. 1(c)]. 27,28 As electron-electron interactions change the velocities of part of the electrons, the interactions with phonons increase. The interaction between hot electrons and phonons continues over a longer timescale of several picoseconds, resulting in a quasi-equilibrium between the electron and phonon systems and elevating the lattice temperature. 13,28 Finally, heat is transferred to the surroundings of the metal structure on the timescale of 100 ps to 10 ns [Fig. 1(d)].

B. Hot electron transfer

When adsorbate molecules or semiconductors are directly attached to a metal nanostructure, hot electrons can be captured and extracted to the adsorbates or semiconductors before thermalization into heat. This provides a new photoconversion route for photochemistry, photovoltaics, photodetection, and sensing. 10,29,30 In general, there are two pathways for hot electron transfer: indirect transfer and direct transfer. 10,31 In the indirect transfer process, 32,33 hot electrons are first generated in the plasmonic metal, and some of them transfer to either the adsorbate or the semiconductor. Owing to rapid relaxation via electron-electron scattering, only the very small fraction of plasmonic hot electrons energetic enough to overcome the energy barrier at the interface can take part in the indirect transfer process; the hot electron injection competes with electron-electron scattering. This is one reason that the indirect transfer process exhibits a very low efficiency (typically <2%). Another reason is that back transfer of electrons may take place at the interface, especially when the energy barrier is low. In the direct hot-electron transfer process, 34-36 suitable empty hybridized orbitals, resulting from the strong interaction of the metal with the adsorbate molecules or closely contacted semiconductors, allow hot electrons to be generated directly in those orbitals during plasmon dephasing. Compared to indirect electron transfer, the direct transfer process has a higher transfer efficiency and a lower energy loss, because the indirect pathway loses much energy to electron-electron and electron-phonon scattering before transfer and to the hot electron injection step across the heterojunction interface.
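A back-of-the-envelope illustration of these energetics follows (our sketch, not from the article; the 550 nm LSPR wavelength, the 1 eV barrier, and the flat nascent distribution are all assumptions). It contrasts the nascent hot-electron population above an interface barrier with the vanishing Fermi-Dirac occupation left after thermalization, which is why injection must outrun electron-electron scattering.

import numpy as np

k_B = 8.617e-5                  # Boltzmann constant, eV/K
hbar_omega = 1239.84 / 550.0    # assumed LSPR photon energy (~2.25 eV at 550 nm)
phi_B = 1.0                     # assumed interface (Schottky) barrier above E_F, eV

# Nascent picture: hot electrons spread roughly over (E_F, E_F + hbar*omega];
# with a crude flat distribution, the fraction able to cross the barrier is:
frac_nascent = max(0.0, (hbar_omega - phi_B) / hbar_omega)
print(f"max hot-electron energy: E_F + {hbar_omega:.2f} eV")
print(f"fraction above the barrier (flat nascent model): {frac_nascent:.2f}")

# After thermalization to a Fermi-Dirac distribution at electron temperature T_e,
# the occupation 1 eV above E_F is negligible even for a very hot electron gas:
def fermi_dirac(E_above_EF, T_e):
    return 1.0 / (np.exp(E_above_EF / (k_B * T_e)) + 1.0)

for T_e in (300.0, 1000.0, 3000.0):
    print(f"Fermi-Dirac occupation at E_F + {phi_B:.0f} eV, T_e = {T_e:4.0f} K: "
          f"{fermi_dirac(phi_B, T_e):.1e}")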
In the direct electron transfer pathway, by contrast, hot electrons are generated directly in the hybridized orbitals by Landau damping, which avoids both the electron-electron scattering in the metal and the energy loss during injection across the interface. After injection, hot electrons interact with the semiconductor or adsorbates and/or transfer back to the metal; this back transfer competes with the hot electron transfer processes and is an obstacle to efficient utilization of plasmonic hot electrons.

1. Metal-adsorbate complexes

Calculations and experiments reveal that two significant Raman peaks of p-aminothiophenol (p-ATP) on nanostructured Au or Ag substrates appear under laser excitation, owing to the formation of a new chemical species, p,p′-dimercaptoazobenzene (DMAB), by coupling of adjacent p-ATP molecules. 29,37-39 This gave direct evidence of hot electron-induced chemical transformation on plasmonic metal nanoparticles (NPs). The two-step indirect transfer is considered to be one of the pathways for hot electron-driven chemical reactions [Fig. 2(a)]. It assumes that hot electrons are first generated within the metal nanoparticles by Landau damping. Subsequently, hot electrons with suitable energy can transfer into the lowest unoccupied molecular orbitals (LUMOs) of the adsorbates across the interfacial barrier, followed by thermalization (interaction with the adsorbates). 40,41 In this case, the hot electron transfer efficiency depends on the incident photon energy, the metal/adsorbate interface, the density of states of the metal nanostructure, and the energy of the LUMO (the plasmonic energy level must lie above the LUMO of the adsorbates). Since the hot electrons are excited in the metal and then transferred out, the optimum transfer efficiency occurs when the incident wavelength matches the peak wavelength of the SPR of the metal nanostructure. Recently, theory and experiments revealed that the timescale for hot electron generation and transfer from the metal to adsorbed molecules was, in some cases, much shorter than expected from the indirect transfer mechanism (Fig. 3). 34-36 This has aroused interest in the direct electron transfer pathway originating from chemical interface damping (CID) [Fig. 2(b)]. The strong interaction between the metal and the adsorbates results in orbital hybridization. With suitable orbital overlap, the plasmon excited by light illumination dephases directly, generating hot electrons in the empty hybridized orbitals on the adsorbate side. Because there is neither electron-electron scattering nor energy loss from interface injection, the theoretical maximum efficiency is much higher than that of the indirect transfer process. The optimum efficiency is achieved when the light wavelength meets two conditions simultaneously: (i) the incident light can excite the SPR, and (ii) the SPR energy is high enough to induce the HOMO-LUMO transition of the hybridized surface states. 42 In this case, compared to the conventional electron-electron and electron-phonon scattering in metals, a new thermalization pathway appears in the metal/adsorbate hybrid system, known as chemical interface scattering (CIS). 43-45 Hot electrons transiently transfer into the adsorbates and back into the metal, leaving a portion of their energy in the adsorbates.
This process occurs repeatedly, resulting in accumulation of energy within the adsorbates and activating them. Such a cycling process can effectively extend the lifetime of the hot electrons (∼10 ps), as shown in Fig. 3, which is clearly beneficial for photochemical reactions.

2. Metal-semiconductor heterojunctions

Plasmonic metals were already used to enhance photoelectrochemical activity in the 1990s. 46,47 Direct evidence of plasmonic hot electron injection into a semiconductor was demonstrated in the 2000s. 32,33,48,49 There are likewise two routes for hot electron transfer in the metal/semiconductor system. As shown in Fig. 2(c), hot carriers are first generated in the metal through plasmon dephasing, and the hot electrons with sufficient energy can then overcome the Schottky barrier at the metal/semiconductor interface and enter the conduction band (CB) of the semiconductor. Recent studies have revealed that hot electron injection depends strongly on the LSPR energy relative to the Schottky barrier height. 50-54 Plasmonic hot electron injection occurs only when the LSPR energy is sufficient to overcome the Schottky barrier. If the Schottky barrier is too high, only a small portion of the hot electrons have sufficient energy to overcome it; if the Schottky barrier is too low or absent, however, the hot electrons that have been injected into the semiconductor CB quickly transfer back to the metal. 11 In addition, the electron transfer efficiency depends on the quality of the interface (i.e., defects and the interaction between the metal and the semiconductor) and on the incident light energy. The maximum theoretical efficiency for indirect hot-electron transfer to a semiconductor is calculated to be 8%. 41,55 However, the experimentally realized efficiency is much lower, typically <2%. 30,56-62 The hot electron transfer timescale for an Au/TiO2 heterostructure was demonstrated by transient absorption spectroscopy to be less than 50 fs, 50-54 much faster than expected from the indirect transfer mechanism (Fig. 3). Based on the timescale and the energy transfer efficiency, a direct transfer mechanism was proposed and confirmed experimentally, 63,64 analogous to the CID process in the metal/adsorbate case. In this model, plasmon dephasing in the metal/semiconductor heterostructure directly excites electrons into empty hybridized states centered in the semiconductor, with holes left in occupied hybridized states centered in the metal [Fig. 2(d)].

C. Energetic distribution of hot carriers

The energy distribution of plasmonic hot carriers is narrow-band and strongly dependent on the metal, particle size, geometry, medium, and carrier lifetime. 20,25,26,41,65-70 Since the lifetime of hot carriers is very short, these dependences were originally revealed by theoretical simulations, and there is a strong motivation to develop accurate experimental techniques with high time resolution. Small plasmonic nanoparticles are more efficient for hot carrier generation than larger ones. Nordlander et al. calculated the energy distribution of hot carriers in Ag nanoparticles (NPs). 26 The results revealed that larger Ag NPs had a higher production rate of hot carriers but lower carrier energies [Fig. 4(a)]. The carrier lifetime also strongly affects the energy distribution of hot carriers.
For longer lifetimes, plasmon decay results in highly energetic hot carriers, whereas for shorter carrier lifetimes the energy lies closer to the Fermi energy of the metal (very low energy). The geometry, including the shape and aspect ratio, also significantly affects the generation and ejection of hot carriers through carrier confinement and surface scattering effects. 68,71 The same conclusion was reached by Brown et al., 65 who showed the influence of phonons, geometry, and classical resistive effects for Au nanoparticles [Fig. 4(b)]. Clearly, the energy distribution of hot carriers depends strongly on the size of the plasmonic nanoparticles. Different plasmonic metals, with their different electronic band structures, show diverse energy distributions. Atwater et al. revealed that Au and Cu produce holes that are 1-2 eV more energetic than the electrons, while in Ag and Al the electrons and holes have nearly equal energies [Fig. 4(c)]. 69,70 They also found that the momentum-direction distributions of the hot carriers are anisotropic [Fig. 4(d)]. It is important to understand the energetic distribution of hot carriers in the plasmonic metal itself, and it is equally essential to clarify their energetic distribution after they are transferred from the plasmonic metal to surface adsorbates or to adjacent semiconductors. In conventional organic dye- or quantum dot-sensitized semiconductors, the electrons are quickly thermalized to the conduction band edge of the semiconductor after the photo-excited electrons are transferred from the organic dye (or quantum dot) to the semiconductor; that is, the transferred electrons exhibit a thermal energy distribution. In contrast, in plasmonic metal-semiconductor heterojunctions, the electrons transferred into the semiconductor show a non-thermalized energy distribution after plasmonic hot electrons are injected from the metal into the conduction band, provided the semiconductor layer is thin. 72 That is, the plasmonic hot electrons transferred to the semiconductor are still "hot." This indicates that a plasmon-sensitized semiconductor could show a higher open-circuit voltage for photovoltaics and a larger thermodynamic driving force for photocatalytic reactions than conventional organic dye- or quantum dot-sensitized semiconductors. The non-thermalized distribution of the transferred hot electrons in the semiconductor can be tuned by the shape of the plasmonic metal nanostructure. 72 Among three types of Au@TiO2 core-shell nanoparticles with nanospheres, nanorods, and nanostars as the cores, more hot electrons were distributed at relatively high energy levels for Au nanostar@TiO2 and Au nanorod@TiO2.

D. Analytical tools for hot carrier characterization

It is important to track the excitation and dephasing of the plasmon experimentally in a time-resolved manner. However, the excitation and collective behavior of the plasmon under pulsed light excitation are extremely fast (typically less than 10 fs), which makes them almost impossible to measure even with state-of-the-art analytical techniques. Current transient absorption spectroscopy (TAS) generally has a time resolution of ∼30 fs, which enables tracking of the late-stage dephasing of the plasmon and of the hot electron transfer from the metal to the semiconductor (or adsorbate). In addition, X-ray absorption near edge structure (XANES) provides an effective tool for measuring the energy levels of plasmonic hot electrons.
Currently, TAS and XANES are the two major analytical tools for the characterization of plasmonic hot electrons.

1. X-ray absorption near edge structure (XANES) spectroscopy

X-ray absorption spectroscopy (XAS) is a powerful analytical tool for characterizing the electronic, structural, and magnetic properties of materials, not only those with high crystallinity but also short-range-ordered or even amorphous structures. XANES utilizes synchrotron X-ray radiation as the excitation source, offering high intensity, a continuous spectrum, excellent collimation, and a tiny probe size. The core electrons are excited to unoccupied electronic states within the analyte; the absorption edge therefore corresponds to the binding energy of a core-level state, which is characteristic of the energetics and the chemical shift. The CB of a given semiconductor is composed of particular orbitals. For instance, the CB of wurtzite ZnO mainly comprises the Zn 4s and 4p states, 73 while the CB of rutile TiO2 is dominated by the Ti 3d orbitals. 74 The absorption edges in XANES spectra can thus be used to investigate the electronic transitions from the core levels to the CB of semiconductors. The Ti L-edge absorption can be used to resolve the CB of TiO2, since the excitation involves the 2p and 3d orbitals; in contrast, the details of the CB of ZnO are revealed by the Zn K-edge absorption, which arises from the electronic transition from the 1s to the 4p states. Moreover, the XANES intensity is related to the electron occupancy of the CB of the semiconductor. Thus, the population of hot electrons injected into the semiconductor from the plasmonic metal can be characterized by XANES, as illustrated in Fig. 5(a). Amidani et al. prepared pure and N-doped TiO2 decorated with Au-NPs with an LSPR band at ∼550 nm. 75 The pre-edge region of the Ti K-edge absorption spectra of the TiO2/Au and N-TiO2/Au powders obtained under irradiation with a 532 nm laser differed from that in the dark. Pure TiO2 cannot be excited by the 532 nm laser because of its large bandgap, while N-TiO2 could be slightly excited because its band structure was altered by N doping. However, the shape and amplitude of the laser-on/off spectral differences for the TiO2/Au and N-TiO2/Au powders were analogous. The change in the Ti K-edge absorption spectrum induced by the 532 nm laser therefore did not result from plasmon-induced resonance energy transfer (PIRET), which would be energetically allowed only for N-TiO2/Au and not for TiO2/Au. Hence, hot electron injection from the Au-NPs into the CB of TiO2 was believed to cause the difference in the electronic transitions. Similarly, Hung et al. investigated the plasmonic effect on the CB of TiO2 nanorods (TiNR) using Ti L-edge XANES spectra, in which electrons in the 2p orbitals are excited to the 3d states. 74 Compared to TiNR-Au in the dark, a positive variation was observed under visible-light irradiation: the electromagnetic field of the plasmonic Au significantly induced vacancies in the CB of the TiNR and enhanced the probability of Ti L-edge excitation. However, further modification of the Au surface with IrOx produced the opposite behavior compared with TiNR-Au; the Ti L-edge peak intensity of TiNR-Au-IrOx was reduced under visible-light irradiation. This observation suggests that the hot holes rapidly transferred from Au to IrOx, which accelerated the hot hole kinetics and improved the separation of the hot carriers.
The increased number of hot electrons injected into the CB of the TiNR then resulted in a decrease in the Ti L-edge excitation intensity. By modifying the Au surface with IrOx, the recombination of hot carriers was suppressed, so that the CB occupancy of the TiNR was dominated by hot electron injection rather than by the plasmon-induced electromagnetic field variation. Recently, XANES measurements were performed to study the energetics of the hot electrons injected from Au into TiO2 in Au@TiO2 core-shell nanoparticles [Fig. 5(b)]. 72 The experimental results, combined with theoretical predictions, show that the plasmonic hot electrons exhibited a non-thermalized distribution in TiO2 after transfer from Au, as shown in Fig. 5(c). Furthermore, the energetic distribution of the injected hot electrons in TiO2 depended on the shape of the Au core.

2. Transient absorption spectroscopy (TAS)

Understanding the kinetics of the generation and transfer of plasmonic hot carriers is very important for designing plasmonic materials and devices. The generation of plasmonic hot carriers and the energy relaxation through electron-electron and electron-phonon scattering in metals all occur on a timescale of several femtoseconds to picoseconds. 76 Additionally, the plasmonic hot electron injection from the metal into neighboring semiconductors or adsorbates is also very fast, typically on the sub-picosecond timescale. Ultrafast time-resolved pump-probe measurements, such as TAS, can be used to analyze the lifetime and kinetics of hot carriers in plasmonic materials. 50,52 In TAS [Fig. 6(a)], electrons of the sample are promoted to an excited state by a pump pulse. A second, weaker pulse (white or monochromatic light), functioning as a probe, passes through the analyte with a time delay (τ1) relative to the pump pulse. A difference in absorption between the ground and excited states of the analyte arises because of the change in carrier population caused by the excitation or by hot electron transfer. This absorption difference is recorded at various time delays (τ2, τ3, ...), so that one can track the evolution of the charge carriers with time [Fig. 6(b)]. Additionally, if the absorption is collected at a fixed delay time while the probe wavelength is varied continuously, a spectrum of the absorption (charge carrier population) versus wavelength is obtained, which also provides information on the kinetic processes at the femtosecond timescale. 77,78 As mentioned above, the plasmon dephases through radiative decay (photon emission) and/or non-radiative decay (Landau damping), and plasmonic hot electrons are generated by Landau damping (<100 fs). The generated hot electrons with a non-thermal distribution relax to the Fermi distribution through several steps (Figs. 1 and 3): 13,28,77 (i) electron-electron scattering during Landau damping (<100 fs); (ii) continued electron-electron scattering (100 fs-1 ps; transformation of the non-thermal distribution into the thermal distribution); (iii) electron-phonon scattering (1-10 ps; transfer of energy from the electrons to the lattice and cooling of the hot electrons); and (iv) phonon-phonon interactions (∼100 ps-10 ns; heat dissipation through the lattice).
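In practice, the TAS kinetics described above are usually quantified by fitting the decay of the transient signal. The Python sketch below uses synthetic data only: the 2 ps and 100 ps components stand in for the electron-phonon and phonon-phonon steps listed above, and a real analysis would additionally convolve the model with the instrument response function.

import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

# Hypothetical pump-probe trace delta-A(t): two decay components standing in
# for electron-phonon (~2 ps) and phonon-phonon (~100 ps) cooling, plus noise.
t = np.linspace(0.1, 50.0, 300)                      # pump-probe delay, ps
signal = (1.0 * np.exp(-t / 2.0) + 0.2 * np.exp(-t / 100.0)
          + rng.normal(0.0, 0.01, t.size))

def biexp(t, a1, tau1, a2, tau2):
    # Bi-exponential model commonly used to parametrize TAS decays.
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

(a1, tau1, a2, tau2), _ = curve_fit(biexp, t, signal, p0=(1.0, 1.0, 0.1, 50.0))
print(f"fast component: tau1 = {tau1:5.2f} ps (electron-phonon cooling)")
print(f"slow component: tau2 = {tau2:5.1f} ps (heat dissipation)")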
The timescale of electron-electron scattering relaxation for small metal nanoparticles is estimated to be ∼200 fs, 21 leading to transformation of the non-thermal distribution into the thermal (Fermi-Dirac) distribution, in which only a small portion of the electrons occupy high energy levels. It is therefore preferable to complete the hot electron transfer from the metal to a semiconductor before relaxation to the thermal distribution (Fig. 3). A recent study shows that the hot electron injection process in the Au-TiO2 heterojunction becomes inefficient after electron-electron scattering relaxation (on the timescale of ∼100 fs). 50 Furube and co-workers utilized visible-pump/infrared-probe TAS to resolve the lifetime of charge transfer between Au and TiO2. 50 Their results indicated that the hot electron injection occurred within 220 fs. After indirect transfer in a metal-semiconductor heterojunction, both a non-thermal and a thermal distribution are possible for the injected electrons in the semiconductor, and the non-thermal distribution can be dominant in many cases. 72 The lifetime of plasmonic hot electrons in the metal alone is very short; however, after hot electrons are transferred into a semiconductor, their lifetime can be extended owing to the reduced electron-electron and electron-phonon scattering. Naldoni et al. revealed by TAS that the location of Au-NPs on TiO2 brookite nanorods had a significant influence on the decay time. 53 When the Au-NPs were distributed throughout the thickness of the TiO2 thin film, the hot carrier lifetimes were on the order of a few hundred femtoseconds; this short lifetime arose because the Au-NPs served as recombination centers for the excited carriers. However, when the Au-NPs were decorated on the top surface of the TiO2 thin film, the lifetime of the hot electrons was four orders of magnitude longer, owing to efficient hopping on the brookite lateral facets.

A. Sensing

Plasmonic hot electrons can bring about new physicochemical processes, such as physisorption or chemisorption of chemicals, selective oxidation, and direct reduction of adsorbates. The interaction can occur directly between a metal and the adsorbed molecules or between a metal and an adjacent semiconductor. The interactions may result in (i) transformation of the adsorbates, (ii) changes in the hot electron-induced current or conductivity, and (iii) modification of the SPR-related optical properties (transmittance and absorption). By transducing these interactions, various sensors based on plasmonic hot electrons can be designed. Plasmon-mediated chemical transformation has been well demonstrated: transformation of chemical species can change the dielectric environment of the plasmonic nanostructures, modulating the SPR-related optical properties, which in turn affects the generation of plasmonic hot electrons. Photoelectric sensors can be designed on this basis.

1. Gas sensors

Generally, resistive gas sensors are designed around Schottky junctions, in which the resistivity can be modulated either by the Schottky barrier itself or by the electron flow across it. 79-84 Taking a hydrogen sensor as an example, Nienhaus et al. devised a metal-semiconductor Schottky diode for hydrogen detection. 79
They fabricated an ultrathin silver film (5 nm thick), comparable in thickness to the mean free path of hot electrons, on n-Si(111) and used it for the selective detection of atomic hydrogen [Fig. 7(a)]. Hydrogen atoms impinging on the Ag film underwent exothermic adsorption, creating hot electrons. The generated hot electrons then traveled ballistically through the Ag film and traversed the Schottky barrier into the n-Si(111), where they were detected as a current proportional to the hydrogen adsorption rate. The theoretical limit of detection (LOD) of such a sensor was down to 2 × 10^10 H atoms cm^-2 s^-1, depending on the noise level (∼4 pA). This protocol was further utilized to detect the hot electron flow in Pt-loaded Au/TiO2 Schottky diodes during the oxidation of CO and hydrogen. 80,81 The Halas group also reported that plasmonic hot electrons could be transferred from Au-NPs into a Feshbach resonance of an H2 molecule adsorbed on the Au surface, leading to dissociation of H2 molecules under visible-light excitation at room temperature. 85 Following this mechanism, a plasmonic optical hydrogen sensor has been developed. 86,87 The hot electrons caused the dissociated hydrogen atoms to adsorb on and diffuse into thermally dewetted Au nanohemispheres, forming a metastable gold hydride (AuHx) with a dielectric constant different from that of Au [Fig. 7(b)]. 86 The change in dielectric constant shifted the LSPR position of the particles and induced a ∼1%-2% change in the optical transmission of the thin film. Cattabiani et al. demonstrated a room-temperature conductometric H2 sensor using Ag-NP-decorated SnO2 nanowires [Fig. 7(c)]. 87 The plasmonic hot electrons from the Ag-NPs promoted the conversion of adsorbed O2^- into highly reactive O^-, which enhanced the catalytic dissociation of H2 and led to an increase in the current. Similarly, plasmonic hot electrons have been harnessed to dissociate adsorbed gaseous molecules for the detection of other gases, such as ammonia and acetylene. With an Au@ZnO-loaded porous silicon film, the photocurrent responded to changes in the ammonia concentration (down to 50 ppm) at room temperature. 88 Moreover, Au-decorated ZnO nanowires used as a transducer exhibited a concentration- and time-dependent p-n transition response for the detection of C2H2 gas at room temperature. 83

2. Biosensors

Recently, plasmonic hot electrons have been employed for biosensing. 89-92 One design utilizes the plasmonic hot electron current as the sensing signal: adsorption of biomolecules modulates the signal by turning the electron-transfer channel in the circuit on or off. For example, Qiao et al. reported the selective detection of bisphenol A (BPA) in drinking water and liquid milk with a label-free photoelectrochemical aptasensor, 89 as illustrated in Fig. 8(a). Under light illumination, the Au/ZnO heterojunction exhibited a higher photocurrent than a pure ZnO nano-pencil, induced by plasmonic hot electrons from the excited Au nanoparticles. Selectivity was achieved by the specific binding of BPA to its aptamer: when BPA was present in the assay, binding of the aptamer to Au blocked the photogenerated electron-transfer channels. This photoelectrochemical aptasensor showed a linear response to the BPA concentration in the range of 1-1000 nmol L^-1 with a limit of detection (LOD) of 0.5 nmol L^-1.
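Figures of merit like these follow from a routine calibration analysis. As a minimal sketch (the concentrations, currents, and blank readings below are invented for illustration and are not the data of Ref. 89), the sensitivity is the slope of the linear calibration and the LOD follows the common 3-sigma criterion:

import numpy as np

# Hypothetical photocurrent calibration for an aptasensor-style assay
# (made-up numbers): concentration in nmol/L, current in nA.
conc = np.array([1.0, 10.0, 100.0, 500.0, 1000.0])
current = np.array([0.55, 1.1, 7.4, 33.0, 66.0])
blank = np.array([0.48, 0.51, 0.47, 0.50, 0.49, 0.52])  # repeated blanks, nA

slope, intercept = np.polyfit(conc, current, 1)   # linear calibration fit
lod = 3.0 * np.std(blank, ddof=1) / slope         # 3-sigma LOD criterion
print(f"sensitivity: {slope:.3f} nA per nmol/L")
print(f"estimated LOD: {lod:.2f} nmol/L")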
In addition, plasmonic hot carriers can be used to improve the sensitivity of electrochemical sensors. Plasmonic hot carriers can participate in anodic oxidation or cathodic reduction, enhancing the electrochemical current. Moreover, in a plasmonic metal/semiconductor composite structure, the plasmonic hot electrons injected into the semiconductor from the metal can increase the charge carrier concentration, improving the response current of the sensor. For example, a plasmon-enhanced glucose electrochemical sensor was constructed based on the plasmon-accelerated electrochemical reaction (PAER) mechanism. 91 In this sensor [Fig. 8(b)], energetic hot carriers are generated on the Au NP surface. At a suitable voltage, the hot holes can be effectively depleted by the oxidation of glucose, owing to their matched energy levels, which effectively inhibits electron-hole recombination; the generated hot electrons are thus driven into the external circuit, producing an observable current. Because the PAER system is free of Schottky junctions, the hot carriers can be harnessed more efficiently, ensuring a higher efficiency of glucose electro-oxidation. As a result, when the LSPR was excited, the photocurrent increased owing to LSPR-induced glucose electro-oxidation [Fig. 8(c)], and the biosensor showed improved sensitivity and LOD [Fig. 8(d)].

B. Photodetection

Photodetectors transform incident light (in a specific spectral range) into an electrical signal. Their operating principle is based on the photovoltaic effect or on photoconductivity modulation; in particular, the field-effect transistor (FET) has been introduced for highly efficient photodetection. Because photodetectors are made of semiconductors such as Si, Ge, InGaAs, and PbS, the sensitive wavelength range is limited by the fixed bandgap of the specific semiconductor. To tune or extend the photodetection spectral range, hot electrons are utilized to enhance or generate internal photoemission in photodetectors. 93 In addition, plasmonic hot electrons can contribute to the photocurrent and enhance the photoresponse, improving the sensitivity of the detectors.

1. Internal photoemission

The Schottky diode and the metal-insulator-metal (MIM) diode are two common structures used for internal photoemission. In the former, 94 a Schottky barrier (Φ_B) is formed at the metal/semiconductor interface. When the incident photon energy (E = hν) is larger than the Schottky barrier height, plasmonic hot electrons can be injected into the semiconductor across the barrier [Fig. 9(a)], generating a photocurrent. The photodetection limit is therefore determined not only by the bandgap of the semiconductor but also by the height of the Schottky barrier. Generally, the Schottky barrier height is less than 1 eV and is set by the difference between the work function of the metal and the electron affinity of the semiconductor. 95 Thus, the plasmonic hot electron injection mechanism can be used to extend the detection bandwidth of a photodetector.
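This design rule is easy to illustrate numerically. In the sketch below (our illustration; the work functions and electron affinities are nominal literature values, and the ideal Schottky-Mott estimate ignores Fermi-level pinning and image-force lowering, both of which matter in real diodes), the barrier height rather than the bandgap sets the internal-photoemission cutoff wavelength:

# Ideal Schottky-Mott barrier for an n-type semiconductor: phi_B = W_metal - chi_semi.
work_function = {"Au": 5.1, "Ag": 4.3}                       # eV, nominal values
electron_affinity = {"TiO2": 4.0, "ZnO": 4.2, "n-Si": 4.05}  # eV, nominal values

for metal, W in work_function.items():
    for semi, chi in electron_affinity.items():
        phi_B = W - chi                  # barrier height, eV
        if phi_B <= 0:
            continue                     # no barrier expected in this ideal picture
        lambda_c = 1239.84 / phi_B       # photoemission cutoff wavelength, nm
        print(f"{metal}/{semi}: phi_B ~ {phi_B:.2f} eV -> cutoff ~ {lambda_c:.0f} nm")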
In the MIM structure shown in Fig. 9(b), 96,97 the insulator interlayer is very thin, and hot electrons in the top metal can tunnel into the bottom metal via the quantum tunneling effect (and vice versa) to produce a photocurrent. By creating asymmetric absorption or asymmetric barrier heights, formed by using different metals or by applying a bias, 98 the tunneling current from one side dominates, leading to a measurable net current. In the MIM model, the insulator layer can be replaced with a semiconductor layer; in addition, one of the metal layers can be replaced by a transparent conducting oxide layer, with a working principle similar to that of the true MIM case. 99-101 However, the internal quantum efficiency of internal photoemission is very low, limiting its applications. The low efficiency is ascribed to the weak light absorption of the metal, which produces only a small number of low-energy hot electrons, and to the large energy losses of the hot electrons during diffusion in the metal and transit across the interface. The energy and momentum distributions of the plasmonic hot electrons can be tuned by the size, shape, and medium; in this way, the responsivity peak, bandwidth, and polarization dependence of devices based on plasmonic hot-electron internal photoemission can be tuned by controlling the SPR of the metal nanostructures. 102 Taking advantage of the great tunability of the plasmon, various photodetectors have been developed based on different plasmonic modes, such as LSPR, 103-105 SPP excitations, 106 grating coupling, 102,107,108 waveguides, 109-113 and plasmonic MIM structures. 97 By tuning the geometry (aspect ratio) of Au nanorods, the SPR absorption spectrum and the photoresponsivity can be modulated. Gao et al. also showed directly that changing the barrier height can modulate the photocurrent in a porous Ag/TiO2 array-based near-infrared photodetector [Fig. 10(b)]. 117 They found that a Schottky barrier was formed by chemisorbed oxygen and could be lowered by reducing the amount of chemisorbed oxygen using ultraviolet (UV) light as a gate input. In this way, the Schottky barrier height could be modulated by the extra gate illumination, resulting in a severalfold to hundredfold enhancement of the plasmon-induced photocurrent. Pescaglini et al. reported an Au nanorod-ZnO nanowire hybrid system showing a large photoresponse under light irradiation in the spectral range of 650-850 nm, accompanied by an "ultrafast" transient photoresponse on a timescale of 250 ms. 118 In gratings with periodically spaced slits or a nanohole array, the incident light can couple strongly to SPPs, leading to strong, resonant absorption. The absorption frequency can be tuned by the nanostructure parameters, such as the periodicity and the slit dimensions. When a grating array is combined with a semiconductor substrate, a photocurrent is generated by the SPP-induced hot electron injection from the metal into the semiconductor. 102,107,108 For example, Sobhani et al. reported a grating-based photodetector consisting of Au gratings on an n-type silicon substrate [Fig. 10(c)]. 102 It showed a responsivity of 0.6 mA W^-1 without an external bias at an internal quantum efficiency of ∼0.2%, which was 20 times larger than that of a nano-antenna-based device; additionally, its wavelength-dependent response was three times narrower. By tuning the grating parameters, the responsivity peak could be tuned linearly between 1295 nm and 1635 nm. This tunability opens up the possibility of designing plasmonic detectors that operate in a narrow bandwidth of infrared light.
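These numbers can be cross-checked with the standard relation between responsivity and quantum efficiency, R = eta*q*lambda/(h*c), i.e., R[A/W] = eta*lambda[nm]/1239.84. The arithmetic below is ours (it assumes the quoted 0.6 mA/W applies near 1300 nm); the gap between the implied external quantum efficiency and the quoted ~0.2% internal value corresponds to the fraction of light actually absorbed in the metal:

def external_qe(R_A_per_W, wavelength_nm):
    # Invert R = eta * lambda / 1239.84 (R in A/W, lambda in nm).
    return R_A_per_W * 1239.84 / wavelength_nm

eqe = external_qe(0.6e-3, 1300.0)   # responsivity quoted above, ~1300 nm assumed
print(f"implied external QE ~ {100 * eqe:.3f}%")
print(f"implied absorbed fraction vs ~0.2% internal QE: {100 * eqe / 0.002:.0f}%")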
2. Plasmonic waveguide-based photodetector

A waveguide-based photodetector design can produce a higher internal quantum efficiency than antenna-based designs. 95,109-113 Because SPPs propagate and excite hot electrons along the metal/dielectric interface, a shorter diffusion pathway is expected for hot electron injection into the semiconductor. Moreover, the electric field component of the SPPs can be perpendicular to the Schottky interface; consequently, the generated hot electrons preferentially have a momentum component perpendicular to the metal/dielectric interface, which improves the hot electron transfer probability. For example, Goykhman et al. reported a silicon waveguide-based SPP Schottky photodetector [Fig. 10(d)]. 109 The photodetector was fabricated on an insulator substrate using a self-aligned approach based on local oxidation of silicon. It showed internal responsivities of 0.25 mA/W and 13.3 mA/W under irradiation at wavelengths of 1.55 μm and 1.31 μm, respectively.

3. Plasmonic MIM photodetector

In MIM photodetectors, one of the metal layers can be replaced with a plasmonic metal nanostructure. Plasmonic hot electrons are generated and subsequently tunnel across the thin oxide insulator layer into the other metal film to produce a photocurrent. These detectors can operate at room temperature without an external bias.

4. Plasmonic FET photodetector

Recently, Kojori et al. reported a plasmon-modulated field-effect transistor (FET) photodetector based on a back-gated thin-film transistor with a plasmonic metal nanostructure (Fig. 11). 119,120 Plasmonic Au nanoparticles were incorporated into the active channel of the transistor, which consists of a source, a drain, and a gate, as shown in Fig. 11(a). The plasmon-induced hot electrons were injected into the ZnO film channel, increasing the channel conductivity and the drain current. As a result, the drain current could be modulated by applying an external electric field and varying the gate voltage [Fig. 11(b)]. Without a gate bias, only the plasmon-induced hot electrons with sufficient energy to overcome the interfacial Schottky barrier can be injected into the semiconductor and collected as a drift current. With a gate bias, an electron accumulation layer (n-channel) forms inside the semiconductor layer close to the mediator layer. The spatial variation in electron density generates a large potential gradient from the floating ground (the metal nanostructure) to the electron accumulation region. The internal electric field created by the gate bias drives the electrons toward the opposite boundary, where the FET channel is located. The migrating hot electrons, driven by the gate voltage, contribute to the channel enhancement and allow more drain current to flow.

C. Solar energy conversion

To address the grand challenges in energy and ecological sustainability, 121 increasing efforts are being made to develop photovoltaic devices, photocatalysts, and photoelectrochemical cells (PECs). 122 Although organic perovskites and tandem-structured inorganic III-V compounds have achieved high solar conversion efficiencies, they are expensive and/or unstable in operation. Hence, alternative materials such as silicon and metal oxides are still under development for solar energy devices. These materials cannot yet meet all the requirements of practical applications because of one or more shortcomings: a large bandgap, low charge mobility, high-density surface trap states, or an indirect bandgap.
For example, wide-bandgap semiconductors such as TiO2 are stable and inexpensive, but they absorb only UV light, which accounts for only ∼5% of solar radiation. To extend the light absorption range and improve the energy conversion efficiency of semiconductors, plasmonic metal nanostructures are incorporated with semiconductors to form heterojunctions. Plasmonic metal nanostructures can absorb visible or infrared light and transfer the plasmonic energy to the semiconductor, enhancing charge separation and migration in the semiconductor. As summarized in Ref. 11, there are three plasmonic energy transfer mechanisms: light scattering/trapping, plasmon-induced resonance energy transfer (PIRET), and hot electron injection. Wu et al. revealed that the balance among the three mechanisms depends on the plasmon dephasing and predicted the theoretical maximum efficiency of solar energy conversion in plasmonic metal-semiconductor heterojunctions. 14,15 If a plasmonic metal nanostructure is utilized to absorb incident light over an extended spectral range and to transfer the absorbed light energy to adjacent molecules or semiconductors, the nanostructure is called a "plasmonic photosensitizer." 11,30 This section focuses on the hot electron injection mechanism and shows how photovoltaic and photocatalytic performance can be enhanced with plasmonic photosensitizers.

1. Photovoltaic devices utilizing plasmonic hot carriers

By nature, both photovoltaic devices and photocatalysis require two common steps: light harvesting and charge separation. In most plasmonic solar cells, plasmonic metals are used as light antennas for light scattering/trapping to improve photovoltaic performance by enhancing light harvesting, whereas application of the plasmonic hot carrier transfer mechanism is limited by its low transfer efficiency. 123,124 To utilize hot carriers effectively, at least three factors need to be considered: (i) the incident light must efficiently excite hot carriers in the plasmonic metal nanostructure; (ii) the plasmonic hot carriers must be efficiently separated and extracted before they cool to the equilibrium state; and (iii) recombination, including thermal relaxation and charge back transfer, must be suppressed as far as possible. Even meeting these three conditions cannot guarantee a high energy transfer efficiency: the excited hot electrons suffer fast relaxation on the timescale of hundreds of femtoseconds as well as the interface barrier of the metal/semiconductor junction, so only a small percentage of hot electrons can be effectively extracted to the semiconductor. 21,63,125 Therefore, it is essential to construct an appropriate interface that enables rapid charge separation and transfer. A sandwich structure, electron transport material/plasmonic metal/hole transport material, is a typical design for a plasmonic photovoltaic device. Generally, n-type semiconductors (e.g., TiO2) are used for hot electron collection and transport, while p-type semiconductors or other hole transport materials such as poly(N-vinylcarbazole) (PVK) are used for hot hole collection and transport. A Schottky junction between the semiconductor and the plasmonic metal nanostructure is the starting point for harvesting hot electrons. For example, Konstantatos et al. designed a metal/insulator/semiconductor heterostructure [Fig. 12(a)] 126 in which the Ag nanostructure electrode generates the LSPR and a 1 nm thick Al2O3 spacer layer passivates the interface states.
In this photodiode device, plasmonic hot electrons can enter the TiO2 across the Au/Al2O3/TiO2 Schottky junction or through Schottky emission, forming a photocurrent [Fig. 12(b)]. This plasmonic device achieved an open-circuit voltage of 0.5 V, a fill factor of 0.5, and a power conversion efficiency of 0.03% with the nano-patterned Ag electrode. In addition, Mubeen et al. demonstrated a plasmonic photovoltaic device using an Au nanorod array/TiO2/Ti sandwich structure. 127 An open-circuit voltage of 210 mV was achieved with this plasmonic photodiode for a 50 nm thick TiO2 layer. However, if only Schottky junctions are utilized to separate the charges coming from the plasmonic metal, the overall solar energy conversion efficiency of the photovoltaic device is too low. Therefore, the plasmonic hot electron injection process is instead used to enhance the performance of conventional photovoltaic devices, such as organic photovoltaics, Si solar cells, and dye-sensitized solar cells (DSSCs). For example, Au nanoparticles were embedded at the hole-conductor/semiconductor (spiro-OMeTAD/TiO2) interface of a solar cell [Fig. 12(c)]. 128,129 The Au nanoparticles played multiple roles; one was to serve as photosensitizers [Fig. 12(d)], with the plasmonic hot electrons injected into the TiO2 and the hot holes transferred to the spiro-OMeTAD hole conductor. The overall performance of the device improved when the Au particle size was reduced from 40 nm to 5 nm, achieving a maximum absorbed photon-to-electron conversion efficiency (APCE), or internal quantum efficiency (IQE), of 13.3%. Recently, plasmonic Ag@TiO2@Pa (benzoic-acid-fullerene bishell) sandwich nanoparticles were embedded in the active layer of a solar cell, in which plasmonic hot electrons were extracted to the fullerene in the Pa shell, contributing to the photocurrent. 130 The Pa outer shell was important for extracting the charge; without it, the charge carriers would be trapped inside the nanoparticles. As a result, the addition of the plasmonic sandwich nanoparticles to the organic solar cell yielded a maximum power conversion efficiency of 13.0%, an enhancement of 12.3%. Moreover, Zhang et al. introduced an Au-TiO2 composite into an organic photovoltaic device, 131 which exhibited a notably enhanced transient photogenerated voltage peak of 0.56 V under light irradiation at 600 nm because of plasmonic hot electron injection into the TiO2. The group of Xiong also integrated Ag nanoplates into a Si-based solar cell, Si-poly(3,4-ethylenedioxythiophene)/poly(styrene sulfonate) (PEDOT:PSS). 132 Hot electrons were generated in the Ag nanoplates at wavelengths of 550-1100 nm and transferred to the CB of Si. Compared with the pristine photovoltaic cells, the current density and the power conversion efficiency were improved by 28% and 40%, respectively, after addition of the Ag nanoplates. Plasmonic Au-TiO2 core-shell structured photoanodes (Au@TiO2, SiO2@TiO2@Au, and SiO2@Au@TiO2) have also been developed to improve DSSC performance. 133,134
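For orientation, the photovoltaic metrics quoted above are tied together by the standard figure of merit PCE = Jsc x Voc x FF / P_in. The sketch below back-solves the short-circuit current density implied by the nano-patterned Ag device; this is our arithmetic only, since Jsc is not reported in the text and AM1.5G illumination at 100 mW/cm^2 is assumed:

# Standard photovoltaic figure of merit: PCE = Jsc * Voc * FF / P_in.
P_in = 100.0                       # assumed AM1.5G illumination, mW/cm^2
Voc, FF, PCE = 0.5, 0.5, 0.0003    # values quoted above for the Ag-electrode device
Jsc = PCE * P_in / (Voc * FF)      # back-solved short-circuit current, mA/cm^2
print(f"implied Jsc ~ {Jsc:.2f} mA/cm^2")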
2. Photocatalysts utilizing plasmonic hot carriers

With the excited hot carriers, a plasmonic metal can directly photocatalyze a chemical reaction on its surface or serve as a photosensitizer for a semiconductor to improve the photocatalytic performance. Typically, in direct photocatalysis by a plasmonic metal nanostructure (metal-adsorbate complexes), the direct transfer pathway of hot carriers, such as CID, can be predominant [Figs. 2(b) and 3]. 11,34 In metal-adsorbate complexes, SPR-mediated hot electrons can be injected into specific electronic states of molecules adsorbed on the surface of the plasmonic metal. Alternatively, occupied electrons in the adsorbate can be injected into the metal and recombine with the hot holes; i.e., the plasmonic hot holes may oxidize the adsorbate. 91,135-138 The hot carrier injection induces various physicochemical processes such as molecular dissociation, desorption, and chemical reactions. 85,139 For example, hot electrons can be injected from Au nanoparticles into the antibonding orbital (1σu*) of an adsorbed H2 molecule, leading to H2 dissociation [Fig. 13(a)]. 85 Indeed, dissociation of H2 molecules was observed by exposing the Au nanoparticles to a mixture of H2 and D2; the HD formation efficiency showed an instantaneous sixfold improvement under laser excitation of the plasmon. Time-domain time-dependent density functional theory (TDDFT) and Ehrenfest dynamics simulations have also confirmed this process. 140 However, the process was suppressed by sequential charge transfer when the H2 molecule was located at the center of the gap between a plasmonic metal dimer. Mukherjee, 141 Landry, 142 and their co-workers prepared Au nanoparticles and Ag nanocubes, respectively, for plasmonic activation of H2 splitting. When the plasmonic materials were excited by light, transient charge carriers were generated at the surface, and the resulting hot electrons were transferred to the adsorbed H2 molecules to drive their dissociation. Interestingly, plasmonic nanoparticles can also catalyze polymerization. 143,144 In this process, plasmonic hot electrons induced the binding of an olefin monomer onto the Au nanoparticle surface and generated radicals. The radicals modulated the polymerization, leading to a polymer coating on the metal nanoparticle. In situ polymerization on plasmonic nanostructures could open a new avenue for patterning polymers at the nanoscale. Moreover, environmental organic contaminants can be decomposed on plasmonic metal nanoparticles via hot carrier injection into the organic compounds. 11 Zhang et al. 145 prepared Au nanoparticles on a zeolite support for the selective photo-oxidation of benzyl alcohol to aldehydes, as shown in Fig. 13(b). The electronic polarization of the plasmon-excited Au nanoparticles was proposed to induce a high electronegativity that abstracts protons through cleavage of the C-H bonds of benzyl alcohol molecules. In addition, plasmonic hot electrons on the Au nanoparticles were transferred to molecular oxygen to produce activated superoxide radicals (O2•−), leading to decomposition of the benzyl alcohol molecules. Plasmonic Au nanoparticles have also shown photocatalytic activity toward the multi-electron, multi-proton reduction of CO2. 146 The results show that plasmonic hot electrons drove the formation of C1 and C2 hydrocarbons on the Au nanoparticle surface. Moreover, ionic liquids can stabilize the high-energy CO2•− radical intermediates formed at the nanoparticle/solution interface and facilitate the hot electron transfer from the Au nanoparticle to the adsorbed CO2 molecules. 147 Recently, increasing attention has been paid to photocatalytic oxidation by plasmonic hot holes. 135-138
For example, Schlather et al. investigated the hot hole-induced oxidation of citrate ions on an Au@SiO2@Au core-shell nanoparticle electrode [Fig. 13(c)]. 135 They found that charge transfer to the adsorbed citrate was most efficient at high photon energies; however, hot holes generated via plasmon decay can also oxidize the citrate if their energy level overlaps with the citrate HOMO level. In the plasmonic metal/adsorbate systems described above, plasmonic hot electrons are injected into the reactants and take part in the chemical reaction, with the plasmonic metal serving as the active reaction site. To extend the photocatalytic applications of plasmonic metals, metal/metal heterostructures have been designed for the selective catalysis of specific reactions. For example, Zheng et al. designed a plasmonic Au/Pt heterojunction photocatalyst 148 in which Pt nanoparticles were decorated on the two ends of individual Au nanorods. Plasmonic hot electrons excited in the Au nanorods were transferred to the Pt nanoparticles, driving water reduction to hydrogen, while hot holes were extracted from the Au nanorods to drive methanol oxidation. The heterostructure facilitated the separation of electrons from holes and improved the surface catalysis of hydrogen evolution. In addition, Aslam and co-workers studied hot-carrier-driven selective catalysis on multimetallic nanostructures. 150-152 In the Au-Pd complex, a supralinear power dependence suggested that the hot carriers induced H2 desorption at the Pd island surface under light illumination. When acetylene was present along with H2, the production selectivity for ethylene relative to ethane was strongly enhanced, approaching 40:1. In plasmonic metal-semiconductor heterostructures, plasmonic hot carriers transfer into the semiconductor through either a direct or an indirect pathway, 11 and the semiconductor serves as the active reaction site. A suitable band alignment between the metal and the semiconductor can facilitate the extraction of plasmonic hot carriers and their participation in the catalytic reaction on the semiconductor surface. Nishijima et al. deposited a gold nanorod pattern on a TiO2 film as the photoanode of a PEC, 56 in which hot electrons were injected into the TiO2 and hot holes were extracted to oxidize water molecules. The presence of the Au nanorod array on TiO2 extended the light absorption range from the UV to the near-infrared region, and photocurrent was generated at wavelengths up to 1200 nm. Primo et al. 153 demonstrated enhanced photocatalytic oxygen generation by depositing Au nanoparticles on ceria (CeO2). CeO2 is a wide-bandgap semiconductor that cannot be excited by visible-light irradiation; nevertheless, the oxygen evolution rate reached 10.5 mmol h^-1 for the gold-supported CeO2 particles under visible-light illumination. The visible-light photocatalytic activity was attributed to the injection of hot electrons from the Au photosensitizer into the CB of CeO2, while the holes in the Au nanoparticles drove the water oxidation reaction to produce oxygen gas. In addition, Chen and co-workers 73 synthesized gold nanospheres on a ZnO nanorod array, which served as the photoanode of a PEC. The photocurrent under visible-light illumination was ascribed to hot electron injection from the gold nanospheres into the ZnO [Fig. 14(a)]. The Fowler theory was utilized to evaluate the number of photoelectrons with enough energy to overcome the Schottky barrier at the Au/ZnO interface.
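A common form of Fowler's internal-photoemission law is eta(hv) proportional to (hv - q*Phi_B)^2 / hv for photon energies above the barrier. The sketch below is our illustration, not the analysis of Ref. 73: the 0.9 eV barrier is an assumed, Au/ZnO-like value and the yield is in arbitrary units. It generates the smooth Fowler background against which the LSPR-related deviation discussed next would stand out.

import numpy as np

phi_B = 0.9                                # assumed Schottky barrier height, eV
wavelengths = np.arange(500, 1401, 100)    # nm
hv = 1239.84 / wavelengths                 # photon energies, eV

# Fowler yield per photon (arbitrary units); zero below the barrier cutoff
# at 1239.84 / 0.9 ~ 1378 nm.
eta = np.where(hv > phi_B, (hv - phi_B) ** 2 / hv, 0.0)
for lam, y in zip(wavelengths, eta):
    print(f"{lam:5d} nm -> relative Fowler yield {y:.3f}")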
Figure 14(b) shows the photocurrent as a function of the wavelength of the incident light, which partially matched Fowler's law. Nevertheless, the photocurrent deviated from the Fowler theory, and the deviation was associated with the LSPR. The results indicated that the photocurrent was dominantly composed of hot electron flow and was enhanced by the LSPR. The mean free path of hot electrons in a metal is approximately 20-30 nm. 70 Previous studies also showed that rapid charge recombination (<1 ns) occurs between the injected hot electrons in the semiconductor and the holes in the plasmonic metal in Au/TiO2. 50,52 Hence, it is important to mitigate the trapping and recombination of hot carriers in photocatalysts. In many cases, plasmonic metals are synthesized with capping ligands, which act as electrical resistors for charge transfer. Recently, ligand-free Au nanoparticles were coupled to TiO2 for photocatalytic dye degradation and hydrogen evolution. 154,155 Besides the plasmonic metal surface, the distribution and architecture of the plasmonic metal nanoparticles and semiconductors in heterojunctions also need to be considered in the design. In a conventional heterojunction structure, metal nanoparticles are randomly distributed in the crystal domains of the semiconductor, and the disordered crystal domains cause additional energy loss during the hot-electron transfer process. In contrast, Bian et al. synthesized a TiO2 superstructure with an ordered configuration, 61 which provided an efficient charge migration pathway and suppressed charge recombination. Time-resolved diffuse reflectance spectroscopy revealed that the time constant for electron accumulation in the Au/meso-TiO2 system was 5.6 min, much longer than in conventional counterparts. This was responsible for the improved photocatalytic activity toward organics degradation and hydrogen generation. In addition, Li et al. revealed that a core@shell structure of plasmonic metal@semiconductor nanoparticles allows strong coupling between the plasmonic metal core and the semiconductor shell. 156 Compared to the Ag core alone, the Ag@Cu2O nanoparticles exhibited an extraordinarily large red-shift of the LSPR band due to the high refractive index of Cu2O, and the LSPR band could be tuned sensitively by tailoring the Cu2O shell thickness [Fig. 14(c)]. As a result, the optimized Ag@Cu2O composite nanoparticles showed a greatly extended light absorption range compared to Cu2O alone; indeed, the absorption of the Ag@Cu2O core-shell nanoparticles covered the entire visible-light region. Owing to the large, intimate metal/semiconductor interfacial area, the Cu2O shell can effectively harness the plasmonic hot electrons from the Ag core and interact strongly with the Ag through near-field coupling. 156 Therefore, the Ag@Cu2O core-shell nanoparticles exhibited extraordinary photocatalytic activity toward methyl orange decomposition under visible-light irradiation [Fig. 14(c)], comparable to that under UV illumination. It is worth noting that the plasmonic hot electron injection process often does not act alone; it may function together with other plasmonic energy transfer mechanisms, such as PIRET, to enhance the performance of materials and devices. 156 The use of co-catalysts is an additional route to aid hot carrier extraction and suppress charge recombination.
Mubeen et al. 57 prepared an autonomous photocatalytic water-splitting device using aligned gold nanorods as light antennas, capped with a TiO2 layer to form a metal-semiconductor Schottky junction [Fig. 14(d)]. The TiO2 cap was modified with platinum nanoparticles, which served as the hydrogen evolution co-catalyst, and the Au nanorod surface was decorated with a cobalt-based oxygen evolution catalyst (Co-OEC). Under visible-light irradiation, plasmonic hot electrons were injected from the Au nanorods into the TiO2 layer and migrated to the Pt nanoparticles, driving the hydrogen evolution reaction, while the hot holes in the Au nanorods were extracted to the Co-OEC catalyst, oxidizing water to oxygen gas. To suppress hot carrier recombination, another strategy has been developed: tailoring the surface properties of a photocatalyst by controlling the selective exposure of different crystal facets. 157,158 The different facets of a photocatalyst show different catalytic performance owing to their different specific energy levels. Accordingly, oxidation and reduction co-catalysts were decorated on different specific facets of the semiconductor. For example, Bai et al. deposited Ag and Pd nanoparticles specifically on the (001) and (110) facets of a BiOCl photocatalyst, respectively. 158 In this design, the Ag nanoparticles served as the plasmonic light antennas, while the Pd nanoparticles acted as the co-catalyst; the spatial distribution of the metal nanoparticles on the different facets improved the charge separation. Selectivity is an important issue in heterogeneous catalysis: it is essential to fully control the reaction pathway, the product composition, and the chemical transformation yield. Hot carriers can be utilized to improve the selectivity of chemical reactions. The energy distribution of hot carriers can be tuned by the material, size, and shape of the plasmonic nanostructures, 34-36,159,160 and the tuned hot carriers can be injected into specific energy levels of the reactants, achieving reaction selectivity. Boerigter et al. reported that the direct hot electron transfer process enabled selective chemical reactions through tuning of the energy level of the hot electrons. 34,36 The oxygen reduction reaction (ORR) generally involves a four-electron transfer process that generates water; however, undesired by-products such as hydrogen peroxide (H2O2) can be produced via a two-electron transfer pathway. To suppress this side reaction, Ag-Pt bimetallic nanocages were prepared as ORR catalysts [Fig. 15(a)]. 160 The hot electrons transferred from Ag to Pt induced a higher electron population in the antibonding states of the O2 adsorbates, which facilitated the breaking of the O-O bonds of the O2 molecules and the production of the desired H2O. Light intensity can also tune the product selectivity. 146 As shown in Fig. 15(b), hot electrons were transferred from the Au nanoparticle to the adsorbed CO2, forming a radical ion intermediate, CO2•− (or its hydrogenated form), which was the rate-determining step (RDS) for C1 generation; after a cascade of hot electron- and proton-transfer steps, CH4 was obtained. Under high-intensity illumination, however, the hot electron transfer rate became large, and a multiple-electron transfer process took place within the surface-adsorbed CO2, activating two CO2 adsorbates simultaneously. The resulting pair of CO2•− intermediates could undergo C-C coupling, so that C2H6 was formed after a series of hot electron and proton transfer steps [Fig. 15(c)].
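The supralinear intensity dependence invoked above is a standard diagnostic of hot-carrier-driven chemistry: fitting rate versus intensity on log-log axes recovers the exponent n in rate proportional to I^n, and n > 1 (or an n that grows with intensity) argues against a purely photothermal, roughly linear response. A minimal sketch with synthetic data follows (the exponent n = 2 and the 5% scatter are assumptions for illustration):

import numpy as np

rng = np.random.default_rng(1)

# Synthetic rate-vs-intensity data following rate ~ I**2 with 5% scatter.
intensity = np.array([0.5, 1.0, 2.0, 4.0, 8.0])   # arbitrary units
rate = 0.3 * intensity ** 2.0 * rng.normal(1.0, 0.05, intensity.size)

# The power-law exponent is the slope of a log-log linear fit.
n, log_prefactor = np.polyfit(np.log(intensity), np.log(rate), 1)
print(f"fitted exponent n = {n:.2f} (n > 1 points to a hot-carrier pathway)")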
The formed CO 2 •− intermediate pair can undergo C-C coupling. As a result, C 2 H 6 was formed after a series of hot electron and proton transfer processes [ Fig. 15(c)]. IV. REMARKS AND OUTLOOK Research on plasmonics has made great progress in the past two decades. Discovery of new theories has advanced the fundamental understanding of plasmon and hot carriers (Fig. 16). In particular, TAS and XANES have made new discoveries or confirmed the theoretical predictions. Rapid development in synthesis and fabrication techniques has resulted in the availability of systematically designed plasmonic materials and devices, which has not only widened the scope of applications of plasmon and hot carriers but also advanced experimental discoveries. The timescales for hot carrier generation, emission, and transfer are very short, leading to significant challenges for fundamental study and applications. Plasmonic hot carriers function at the timescale of femtosecond to picosecond (<10 ps) before thermal equilibration. In the aspect of application studies, the timescales for chemical reactions are much longer (typically a few hundred femtoseconds or above) than the lifetime of the plasmonic hot carriers. It is very important to collect hot electrons timely or/and to extend the lifetime of the hot carriers when designing plasmonic nanostructures to enable hot carriers to participate in photochemical reactions. Indirect and direct hot electron transfer processes have been utilized to design materials and devices. Besides these processes, surface catalysis, surface passivation, and heating effects may function together with plasmonic effects, which contribute to the enhanced performance of materials and devices. This complicates the separation of the role of plasmon from other effects during operation of materials and devices. As such, significant effort must be made to separate these mechanisms or effects to fully understand and utilize all these effects effectively. Indeed, several articles have presented the specific methodologies in detail for identifying and separating these mechanisms. 11,15,[161][162][163] In a plasmonic heterojunction, there are three main mechanisms of energy transfer from a plasmonic metallic nanostructure to a semiconductor or other materials, 11 that is, light scattering/trapping, plasmon-induced resonance energy transfer (PIRET), and hot carrier injection. It was observed from the experimental results that the hot carrier injection mechanism showed a far lower energy transfer efficiency than the other two processes. For photodetectors, chemical sensors, and bio-sensors, high signal output such as photocurrent is not that demanding. Therefore, the application of plasmonic hot electrons in these devices is promising. For plasmonic solar energy conversion devices, high photocurrent and high energy conversion efficiency are critical to commercialization of these devices. In the direct hot electron transfer pathway, the theoretical maximum efficiency is predicted to be as high as 50%. In the indirect hot electron transfer pathway, the theoretical maximum transfer efficiency is predicted to be 8%. However, in the experiments performed so far, the achieved efficiency is much lower than the theoretical maximum in both cases. Plasmonic hot electrons can be generated abundantly. However, it still remains a significant challenge in effectively harnessing plasmonic hot electrons for solar energy conversion. 
Theoretical understanding of plasmonic hot electron transfer, structure design, material synthesis, and interface engineering is necessary to improve the energy transfer efficiency and finally meet the demands of practical applications. Compared with the research on plasmonic hot electrons, studies on hot holes are limited. In reality, hot holes are equally as important as hot electrons. A few studies have shown the considerable effects of hot holes.18,164,165 Hot holes can be extracted and involved in photoelectrochemical processes.16,91,135-138 It is interesting that plasmonic hot holes can be extracted to either an n-type semiconductor or a p-type semiconductor. After plasmonic hot holes are extracted to an n-type semiconductor, they can drive an oxidation reaction. After plasmonic hot holes are transferred to a p-type semiconductor, they can increase the open-circuit voltage and stabilize the holes.137 To identify the roles of hot holes and the associated mechanisms, the methods used for hot electrons can be applied, with due scrutiny, to hot holes. More details on the applications of plasmonic hot holes can be found in a recent review article by Tatsuma.137 Currently, the main plasmonic materials are gold and silver, which are expensive. It is necessary to develop inexpensive plasmonic materials through fundamental study and improved fabrication technology. Copper nanostructures and heavily doped semiconductors are promising candidates. Recent research has revealed that copper nanocubes have a strong and narrow LSPR absorption band.166,167 In addition, if a semiconductor is heavily doped, especially with oxygen vacancies, the free charge carrier concentration may increase to even higher than 10²¹ cm⁻³. The doped semiconductor will then manifest SPR under visible-light or infrared-light excitation.168-170 Plasmonic semiconductors can generate hot carriers by themselves. They may produce a higher energy transfer efficiency in plasmonic semiconductor-regular semiconductor heterojunctions, because there may be less energy loss via the indirect transfer pathway across the interface, or via the direct transfer pathway owing to easier orbital hybridization. Recently, plasmonic semiconductors have triggered intense interest. Plasmonic semiconductor nanostructures based on Cu2−xS, WO3−x, and MoO3−x have been used for SERS, photocatalysis, and water splitting. However, fundamental research, device design, and fabrication of plasmonic semiconductors are just emerging. AUTHORS' CONTRIBUTIONS H.T. and C.J.C. contributed equally. All authors contributed to this work and have given approval to the final version of this paper. ACKNOWLEDGMENTS This work was partially supported by the Armstrong/Siadat Professorship Endowment for N.W., the Ministry of Science and Technology of Taiwan (Contract No. MOST 107-2113-M-002-008-MY3) for R.S.L., and the National Natural Science Foundation of China (Grant Nos. 51972308 and 21673245) for H.T. and Z.H., respectively. DATA AVAILABILITY Data sharing is not applicable to this article as no new data were created or analyzed in this study.
14,337.4
2020-06-14T00:00:00.000
[ "Physics", "Materials Science", "Environmental Science", "Chemistry" ]
Radiative Effects of Water Clouds on Heat , Cloud Microphysical and Surface Rainfall Budgets Associated with Pre-Summer Torrential Rainfall This study investigates thermal, cloud microphysical and surface-rainfall responses to the radiative effects of water clouds by analyzing two pairs of two-dimensional cloud-resolving model sensitivity experiments of a pre-summer heavy rainfall event. In the presence of the radiative effects of ice clouds, exclusion of the radiative effects of water clouds reduces the model domain mean rain rate through the mean hydrometeor increase, which is associated with the decreases in the melting of graupel and cloud ice caused by enhanced local atmospheric cooling. In the absence of the radiative effects of ice clouds, removal of the radiative effects of water clouds increases model domain mean rain rate via the enhancements in the mean net condensation and the mean hydrometeor loss. The enhanced mean net condensation and increased mean latent heat are related to the strengthened mean infrared radiative cooling in the lower troposphere. The increased mean hydrometeor loss associated with the reduction in the melting of graupel is caused by the enhanced local atmospheric cooling. INTRODUCTION Cloud radiative processes play an important role in the development of precipitation systems.The cloud-radiation interaction could lead to the destabilization of the environment (e.g., Dudhia 1989), the unstable thermal stratification of stratiform clouds in the upper troposphere (e.g., Lilly 1988), and the development of second circulation by the different radiative heating between cloudy and clear-sky regions (e.g., Gray and Jacobson 1977).Tao et al. (1993) found that infrared radiative cooling enhances surface rainfall in their cloud-resolving model simulations of precipitation systems in the tropics and mid-latitudes.Fu et al. (1995) revealed that clear-sky infrared radiative cooling enhanced rainfall while weakened infrared radiative cooling due to stratiform clouds reduces rainfall.The nocturnal rainfall peaks in the diurnal variations of surface rainfall result from the decreased saturation mixing ratio due to the falling temperature caused by the infrared radiative cooling (e.g., Sui et al. 1997Sui et al. , 1998;;Gao and Li 2010).Wang et al. (2010) and Shen et al. (2011a, b) studied cloud radiative effects of pre-summer heavy rainfall processes and found that the rainfall responses to cloud radiative processes depend on cloud type and stage of convective development.The exclusion of cloud radiative effects increases the mean rainfall during the onset and decay phases, but it reduces the mean rainfall during the mature phase (Shen et al. 2011b).The removal of the radiative effects of ice clouds reduces the mean rainfall during the onset phase, whereas it enhances the mean rainfall during the mature and decay phases.The elimination of the radiative effects of water clouds weakens the mean rainfall during the mature phase, but increases the mean rainfall during the decay phase (Shen et al. 2011a).Although Shen et al. (2011a) examined the radiative effects of water clouds on rainfall through the analysis of vertical structures of heat budget, they did not provide explanations of why and how the radiative processes of water clouds affect hydrometeor change/convergence, which may be an important rainfall generating process. 
In this study, the radiative effects of water clouds on the heat, cloud microphysical, and surface rainfall budgets associated with a pre-summer torrential rainfall event are investigated by revisiting the sensitivity simulation data from Shen et al. (2011a). A five-day mean analysis is conducted to examine the radiative effects of water clouds on a pre-summer heavy rainfall event during 3-8 June 2008. Since ice clouds may reduce the incoming solar radiative flux that reaches water clouds and the outgoing infrared radiative flux emitted from water clouds and the surface, the radiative processes of ice clouds may alter the radiative effects of water clouds on rainfall. Thus, the radiative effects of water clouds on the heat, cloud microphysical, and surface rainfall budgets will be discussed in the presence and in the absence of the radiative effects of ice clouds, respectively. Section 2 briefly describes the model, large-scale forcing, and sensitivity experiments. Section 3 discusses the analysis, and section 4 summarizes the findings. MODEL AND EXPERIMENTS The data used in this study come from Shen et al. (2011a) and are obtained from four sensitivity experiments with a two-dimensional cloud-resolving model. The model setups used in this study are summarized in Table 1, and the model microphysical schemes can be seen in Table 2. A longitudinally oriented rectangular area of 108-116°E, 21-22°N, within which the torrential rainfall occurred, is chosen as the model domain for the calculation of the large-scale forcing (Fig. 1). The model is integrated from 0200 Local Standard Time (LST) 3 June to 0200 LST 8 June 2008, during the pre-summer heavy rainfall event. Model simulations in the control experiment (CTL) have been compared with available observations in terms of rain rate and temperature and water vapor profiles (Wang et al. 2010; Shen et al. 2011b), and show good agreement. The three sensitivity experiments are identical to the control experiment, except that the mixing ratios of water clouds, ice clouds, and both water and ice clouds are set to zero in the calculations of radiation in the sensitivity experiments NWR, NIR, and NCR, respectively. The comparison between CTL and NWR is conducted to study the radiative effects of water clouds on rainfall in the presence of the radiative effects of ice clouds. NIR and NCR are compared to examine the radiative effects of water clouds on rainfall in the absence of the radiative effects of ice clouds. Radiative Effects of Water Clouds in the Presence of Radiative Effects of Ice Clouds In the presence of the radiative effects of ice clouds, the exclusion of the radiative effects of water clouds reduces the mean rain rate from 1.36 mm h⁻¹ in CTL to 1.32 mm h⁻¹ in NWR (Table 3). The mass-integrated cloud budget shows that the rain rate (P_S) is associated with the net condensation (Q_NC) and the hydrometeor change/convergence (Q_CM), i.e., P_S = Q_NC + Q_CM. Here, q_5 = q_c + q_r + q_i + q_s + q_g, where q_c, q_r, q_i, q_s, and q_g are the mixing ratios of cloud water, raindrops, cloud ice, snow, and graupel, respectively, and z_t and z_b are the heights of the top and bottom of the model atmosphere, respectively.
Table 1. Model setups. Prognostic equations: potential temperature, specific humidity, five hydrometeor species, and perturbation zonal wind and vertical velocity. Basic model parameters: model domain of 768 km, grid mesh of 1.5 km, time step of 12 s, and 33 vertical layers.
Lateral boundary conditions: cyclic. Model reference: Gao and Li (2008) and Li and Gao (2011). Model integration: 0200 LST 3 June to 0200 LST 8 June 2008. Large-scale forcing: vertical velocity and zonal wind (see Fig. 1), and horizontal advections (not shown). Model surface boundary conditions: the surface temperature and specific humidity from NCEP/GDAS are also imposed in the model to calculate the surface sensible heat flux and evaporation flux.
Table 2. List of microphysical processes and their parameterization schemes [Lin et al. (1983, LFO), Rutledge and Hobbs (1983; 1984; RH83, RH84), Tao et al. (1989, TSM), and Krueger et al. (1995, KFLC)]; e.g., P_GFR: growth of graupel by the freezing of raindrops (LFO).
Note that the model domain mean hydrometeor convergence is zero due to the cyclic lateral boundaries used in the model. Since the radiative effects of clouds enter the heat budget through the radiative heating term, the mean heat budget can be analyzed. Following Li et al. (1999), the model domain mean heat and specific humidity budgets can be expressed in terms of the quantities defined below. Here, π = (p/p_o)^(R/c_p); R is the gas constant; c_p is the specific heat of dry air at constant pressure p, and p_o = 1000 mb; ρ is the height-dependent air density; T is the air temperature; θ is the potential temperature; u and w are the zonal and vertical components of the wind; Q_cn is the latent heat due to the phase change between water vapor and the five cloud species; Q_R is the radiative heating rate due to the convergence of the net solar and infrared radiative fluxes; an overbar denotes the model domain mean, and a prime denotes a perturbation from the domain mean; the subscript "o" denotes a value imposed on the model, constructed from National Centers for Environmental Prediction/Global Data Assimilation System (NCEP/GDAS) data. The heat budget (2) states that the local change of the model domain mean temperature is determined by condensational heating, radiative heating, convergence of the vertical heat flux, vertical temperature advection, and the imposed horizontal temperature advection. The removal of the radiative effects of water clouds weakens the mean infrared radiative cooling from CTL to NWR at altitudes of 2 to 13 km (Fig. 2b). The reduced mean latent heat between altitudes of 2.5 and 6.5 km corresponds to the weakened mean infrared radiative cooling (Fig. 2a). The suppressed mean latent heat tends to cool the local atmosphere at altitudes between 2.5 and 4.5 km, which causes an unstable vertical stratification near the surface. In addition, the reduced net condensation associated with the decreased mean latent heat decreases the consumption of water vapor, which is then available for the enhancement of convection near the surface. As a result, the mean net condensation and the associated mean latent heat increase near the surface. The enhanced mean latent heat near the surface is largely canceled by the weakened mean latent heat in the mid troposphere, but the magnitude of the former is larger than that of the latter, which leads to a slight increase in the mean net condensation from CTL to NWR (Table 3). To examine the difference in Q_CM between CTL and NWR, the mean cloud microphysical budgets are analyzed. The comparison of the mean cloud microphysical budget between CTL and NWR reveals a similar Q_CMR (-0.01 mm h⁻¹) in the two experiments (Fig. 3).
The difference in Q_CM is associated with the differences in Q_CMC (0.0 in CTL versus -0.02 mm h⁻¹ in NWR), Q_CMI (0.02 mm h⁻¹ in CTL versus 0.0 in NWR), and Q_CMG (0.02 mm h⁻¹ in CTL versus 0.0 in NWR). The vapor condensation rate (P_CND) is larger in NWR than in CTL, forming a source for cloud water in NWR, while the collection rate of cloud water by rain (P_RACW) and the accretion rate of cloud water by graupel [P_GACW (T < T_0)] are larger in CTL than in NWR as a result of the melting of cloud ice to cloud water (P_IMLT). The larger vapor condensation in NWR than in CTL may be related to more available water vapor, owing to less consumption of water vapor, as indicated by less rainfall. Compared with NWR, the melting of cloud ice to cloud water is the microphysical process responsible for the sink of cloud ice in CTL. The melting of graupel to rain (P_GMLT) is larger in CTL than in NWR, which leads to a sink for graupel in CTL. The melting of cloud ice to cloud water in CTL and the enhancement of the melting of graupel to rain from NWR to CTL may be associated with the reduced atmospheric cooling from 2.5 to 4.5 km, which corresponds to the increased convergence of the vertical heat flux from NWR to CTL at altitudes around 4 km and to the suppressed heat divergence and enhanced latent heat from NWR to CTL between 2.5 and 4 km (Fig. 2a). The reduction in the mean rain rate caused by the exclusion of the radiative effects of water clouds can also be examined by analyzing the surface rainfall budget. In the surface rainfall budget (Gao et al. 2005; Cui and Li 2006), the rain rate is associated with the local water-vapor change (Q_WVT), water vapor convergence (Q_WVF), surface evaporation (Q_WVE), and hydrometeor change/convergence (Q_CM), i.e., P_S = Q_WVT + Q_WVF + Q_WVE + Q_CM, where q_v is the specific humidity; u and w are the zonal and vertical components of the wind, respectively; E_S is the surface evaporation rate; an overbar denotes a spatial mean; a prime denotes a perturbation from the spatial mean, and the superscript "o" denotes imposed NCEP/GDAS data. Shen et al. (2010) analyzed grid-scale simulation data of tropical rainfall by partitioning them into eight rainfall types based on the surface rainfall budget. Shen et al. (2012) applied this rainfall separation scheme to the evaluation of an existing convective-stratiform rainfall separation scheme based on rain intensity (e.g., Tao et al. 1993) and found that the convective rainfall contains a considerable amount of rainfall associated with water vapor divergence. Since convective and stratiform rainfall are associated with water vapor convergence and divergence, respectively, we separated the grid-scale rainfall simulation data into rain types associated with water vapor convergence and divergence. Table 4 reveals that the reduction in the mean rain rate from CTL to NWR results from the weakened rain rate associated with water vapor divergence, which corresponds to a slowdown in the hydrometeor loss/convergence, while the reduced water vapor divergence decreases the local atmospheric drying.
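To make the budget bookkeeping above concrete, the following is a minimal computational sketch of the surface rainfall budget diagnosis. It is not the diagnostic code used in this study; the array layout, the mass-weighted vertical integration, and the sign convention that local drying and hydrometeor loss contribute positively to P_S are assumptions for illustration only.

```python
# Minimal sketch of the surface rainfall budget P_S = Q_WVT + Q_WVF + Q_WVE + Q_CM.
# Array shapes, units, and sign conventions are assumptions for illustration only.
import numpy as np

def column_integral(field, rho, z):
    """Mass-weighted vertical integral over the model column."""
    return np.trapz(rho * field, z, axis=0)

def surface_rainfall_budget(qv_now, qv_prev, q5_now, q5_prev, dt,
                            rho, z, moisture_conv, surf_evap, hydro_conv=0.0):
    """Diagnose the budget terms from domain-mean profiles at two times.

    qv_*          : water-vapor mixing-ratio profiles
    q5_*          : total hydrometeor mixing ratio, q5 = qc + qr + qi + qs + qg
    moisture_conv : column-integrated water-vapor convergence (Q_WVF)
    surf_evap     : surface evaporation rate (Q_WVE)
    hydro_conv    : hydrometeor convergence (zero for cyclic lateral boundaries)
    """
    q_wvt = -(column_integral(qv_now, rho, z) - column_integral(qv_prev, rho, z)) / dt
    q_wvf = moisture_conv
    q_wve = surf_evap
    q_cm = -(column_integral(q5_now, rho, z) - column_integral(q5_prev, rho, z)) / dt + hydro_conv
    p_s = q_wvt + q_wvf + q_wve + q_cm
    return {"P_S": p_s, "Q_WVT": q_wvt, "Q_WVF": q_wvf, "Q_WVE": q_wve, "Q_CM": q_cm}
```

Setting hydro_conv to zero reflects the remark above that the model domain mean hydrometeor convergence vanishes for cyclic lateral boundaries.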
Radiative Effects of Water Clouds in the Absence of Radiative Effects of Ice Clouds In the absence of the radiative effects of ice clouds, the exclusion of the radiative effects of water clouds increases the mean rain rate through the increased mean net condensation from NIR to NCR and the mean hydrometeor loss in NCR (Table 3). A comparison of the mean heat budget between the two experiments shows that the removal of the radiative effects of water clouds weakens the mean infrared radiative cooling and the mean solar radiative heating at altitudes above 4.5 and 3 km, respectively, but enhances the mean infrared radiative cooling and increases the mean solar radiative heating at altitudes below 4 and 3 km, respectively (Fig. 4). As a result, the suppressed mean latent heat above 3.5 km altitude corresponds to the weakened mean infrared radiative cooling, whereas the enhanced mean latent heat below 3.5 km corresponds to the strengthened mean infrared radiative cooling (Fig. 4a). The weakened mean latent heat in the mid and upper troposphere is largely offset by the intensified mean latent heat in the lower troposphere, which causes a slight increase in the mass-integrated net condensation from NIR to NCR. The analysis of the cloud microphysical budgets shown in Fig. 5 reveals that the difference in Q_CM is associated with the differences in Q_CMR (0.01 mm h⁻¹ in NIR versus 0.02 mm h⁻¹ in NCR), Q_CMS (-0.01 mm h⁻¹ in NIR versus 0.0 in NCR), and Q_CMG (0.01 mm h⁻¹ in NIR versus 0.0 in NCR) (Fig. 5). Q_CMR is slightly larger in NCR than in NIR because the melting rate of graupel to rain is slightly lower in NCR than in NIR. The reduction in the melting from NIR to NCR may be associated with the enhanced local atmospheric cooling above 4 km (Fig. 4a), which is caused by the suppressed mean latent heat and the weakened convergence of the vertical heat flux. Q_CMS + Q_CMG = 0 in NIR, mainly due to the fact that the accretion rate of snow by graupel (P_GACS) is slightly lower in NIR than in NCR. The enhanced rainfall from NIR to NCR is caused by the increased rainfall associated with water vapor divergence, which is associated with the strengthened hydrometeor loss/convergence, while the intensified water vapor divergence increases the local atmospheric drying. SUMMARY The radiative effects of water clouds on the heat, cloud microphysical, and surface rainfall budgets during pre-summer rainfall over southern China were examined through the analysis of sensitivity experiment data for a heavy rainfall event that occurred during 3-8 June 2008. The control experiment and the sensitivity experiment without the radiative effects of water clouds were compared to study the rainfall responses to the radiative effects of water clouds when the radiative effects of ice clouds were turned on. The sensitivity experiments without the radiative effects of ice clouds and without cloud-radiation interaction were also compared to study the rainfall responses to water cloud-radiation interaction when the radiative effects of ice clouds were turned off. The main conclusions are:
Fig. 2. Vertical profiles of differences between NWR and CTL (NWR-CTL) for (a) local temperature changes (black), latent heat (red), convergence of vertical heat flux (green), vertical temperature advection (blue), and radiation (orange), and (b) radiation (orange) and its solar heating (red) and infrared cooling (blue) components, averaged for 5 days and over the model domain. Unit is °C d⁻¹.
Fig. 3. Time-mean cloud microphysical budgets in (a) CTL and (b) NWR. Units for cloud hydrometeors and conversions are mm and mm h⁻¹, respectively. Cloud microphysical conversion terms and their schemes can be found in Table 2. T_0 = 0°C.
Fig. 4. Vertical profiles of differences between NCR and NIR (NCR-NIR) for (a) local temperature changes (black), latent heat (red), convergence of vertical heat flux (green), vertical temperature advection (blue), and radiation (orange), and (b) radiation (orange) and its solar heating (red) and infrared cooling (blue) components, averaged for 5 days and over the model domain. Unit is °C d⁻¹.
Table 2 (excerpt). P_GACS: growth of graupel by the accretion of snow (RH84); P_GACW: growth of graupel by the accretion of cloud water (RH84); P_WACS: growth of graupel by the riming of snow (RH84); P_GDEP: growth of graupel by the deposition of vapor (RH84). ([P_CND] + [P_DEP] + [P_SDEP] + [P_GDEP]) represents the cloud source term, which consists of the vapor condensation rate for the growth of cloud water ([P_CND]) and the vapor deposition rates for the growth of cloud ice ([P_DEP]), snow ([P_SDEP]), and graupel ([P_GDEP]); -([P_REVP] + [P_MLTG] + [P_MLTS]) denotes the cloud sink term, which includes the growth of vapor by the evaporation of raindrops ([P_REVP]), the evaporation of liquid from graupel surfaces ([P_MLTG]), and the evaporation of melting snow ([P_MLTS]).
Comparison of the mean cloud microphysical budget between CTL and NWR reveals that the weakened mean rain rate from CTL to NWR is associated with the mean hydrometeor change switching from a loss in CTL to a gain in NWR, while the mean net condensation increases from CTL to NWR.
Table 3. Cloud microphysical budgets (P_S, Q_NC, and Q_CM) averaged for 5 days over the model domain in CTL, NWR, NIR, and NCR, and their differences between NWR and CTL (NWR-CTL) and between NCR and NIR (NCR-NIR). Unit is mm h⁻¹.
4,244.8
2014-02-01T00:00:00.000
[ "Environmental Science", "Physics" ]
Detecting Linkedin Spammers and its Spam Nets Spam is one of the main problems of the WWW. Many studies exist about characterising and detecting several types of Spam (mainly Web Spam, Email Spam, Forum/Blob Spam and Social Networking Spam). Nevertheless, to the best of our knowledge, there are no studies about the detection of Spam in Linkedin. In this article, we propose a method for detecting Spammers and Spam nets in the Linkedin social network. As there are no public or private Linkedin datasets in the state of the art, we have manually built a dataset of real Linkedin users, classifying them as Spammers or legitimate users. The proposed method for detecting Linkedin Spammers consists of a set of new heuristics and their combinations using a kNN classifier. Moreover, we proposed a method for detecting Spam nets (fake companies) in Linkedin, based on the idea that the profiles of these companies share content similarities. We have found that the proposed methods were very effective. We achieved an F-Measure of 0.971 and an AUC close to 1 in the detection of Spammer profiles, and in the detection of Spam nets, we have obtained an F-Measure of 1. I. INTRODUCTION Currently, the WWW is the biggest information repository ever built, and it is continuously growing.According to the study presented by Gulli and Signorini [1] in 2005, the Web consists of thousands of millions of pages.In 2008, according to Official Blog of Google 1 , the Web contained 1 trillion unique URLs. Due to the huge size of the Web, search engines are essential tools in order to allow users to access relevant information for their needs.Search engines are complex systems that allow collecting, storing, managing, locating and accessing web resources ranked according to user preferences.A study by Jansen and Spink [2] established that approximately 80% of search engine users do not take into consideration those entries that are placed beyond the third result page.This fact, together with the great amount of money that the traffic of a web site can generate, has led to the appearance of persons and organizations that use unethical techniques to try to improve the ranking of their pages and web sites.Persons and organizations that use these methods are called spammers, and the set of techniques used by them, are called Spam techniques. 
1 http://googleblog.blogspot.com.es/2008/07/we-knew-web-was-big.html
There are different types of Spam based on the target client: Web Spam [3][4] or Email Spam [5]. Web Spam contains several Spam types, such as Blog/Forum Spam, Review/Opinion Spam and Social Networking Spam. Blog/Forum Spam is the Spam created by automatically posting random comments or promoting commercial services on blogs, wikis and guestbooks. Review/Opinion Spam tries to mislead readers or automated opinion mining and sentiment analysis systems by giving undeserved positive opinions to some target entities in order to promote them and/or by giving false negative opinions to some other entities in order to damage their reputations. Finally, Spam is also becoming a problem in social networks. There are existing studies in the literature about Spam in Video Social Networks [6] or Twitter [7][8]. There are several features of Social Networks that could make Spam even more attractive: • The target client is directly the final user. Web Spam is focused on content, so the Web Spammers try to improve the relevance of a web site by, for example, keyword stuffing. When the user conducts a search, it is likely that a Web Spam page will appear. However, it depends on the user clicking on this Web Spam page. Social Networks allow direct Spam, therefore the user will receive the Spam no matter what. • It is focused on specific user profiles. In the case of Email Spam, the content and the products of the email are generic because Spammers do not have information about the target users. However, Social Networks (Facebook [9], Twitter [10] or Linkedin) allow us to know a great amount of user data, so spammers use these data to aim each type of content or product at a specific audience. • Social networks contain search tools to target a certain demographic segment of users. This article focuses on Spam in the Linkedin social network. Linkedin is a social networking web site for people in professional occupations. It was founded in 2002, and in 2013 Linkedin had more than 200 million registered users in more than 200 countries. Although some existing works have been performed to detect Spam in some well-known social networks, to the best of our knowledge this is the first one focused on the Linkedin social network, and it presents a different approach to detecting Spam in Social Networks. First, due to the lack of public or private Linkedin Spam datasets in the state of the art, we have generated one by means of a honeypot profile and searches for Spam phrases. The process used to create the dataset is explained in Section V. Second, we have created a method for detecting Linkedin Spammers. For that, we have analysed the Spammer profiles on Linkedin and proposed a set of new heuristics to characterise them. We have then studied the combination of these heuristics using different types of classifiers (Naïve Bayes [11], SVM [12], Decision Trees [13] and kNN [14]). Third, we present a method for detecting Linkedin Spam nets, that is, for detecting sets of fake users created to send Spam messages to the real users connecting with them. This allows legitimate companies to be distinguished from fake companies, which create a large number of profiles for the sole purpose of generating Spam. The method is based on the similarity of their profiles and contacts. For that, the method uses distance functions (Levenshtein [15], Jaro-Winkler [16], Jaccard [17], etc.), which calculate the similarity value of each company.
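To illustrate how such profile heuristics can be fed to a kNN classifier, a minimal sketch follows. It is not the implementation used in this article (the experiments reported in Section VI were run in WEKA); the feature names, the use of scikit-learn, and the placeholder variables `profiles` and `labels` are assumptions for illustration only.

```python
# Minimal sketch: combining the profile heuristics of Section IV-A with a kNN classifier.
# Feature names are hypothetical; `profiles` and `labels` stand for the labeled dataset of Section V.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def profile_features(profile):
    """Map one Linkedin profile (represented here as a dict) to a feature vector."""
    return [
        profile["num_words"],            # number of words in the profile
        profile["num_contacts"],         # number of contacts
        profile["name_size"],            # length of the user name
        profile["location_size"],        # length of the location string
        int(profile["lowercase_name"]),  # name written in lowercase (binary)
        int(profile["rhythmic_name"]),   # first and last name share a prefix (binary)
        int(profile["has_photo"]),       # profile contains a photo (binary)
        int(profile["plagiarism"]),      # copied or auto-generated content (binary)
    ]

X = np.array([profile_features(p) for p in profiles])  # profiles: labeled Linkedin profiles
y = np.array(labels)                                   # labels: 1 = Spammer, 0 = legitimate

knn = KNeighborsClassifier(n_neighbors=1)              # k = 1, as in the experiments of Section VI
scores = cross_val_score(knn, X, y, cv=10, scoring="f1")  # ten-fold cross validation
print("mean F-Measure:", scores.mean())
```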
After performing the experiments, we have determined that for the Spammers detection method the best classifier is kNN, and for the Spam nets detection method the best distance function is Levenshtein. In short, these are the main contributions of this article: a) a detection method for Linkedin Spammers, b) a detection method for Spam nets and c) the first Linkedin Spam dataset. The structure of this article is as follows.In Section II we comment on the works presented in the literature regarding the different types of Spam, and Spam techniques, as well as the ones that deal with the distinct detection methods.Section III shows the presence of Spam in social networks, specifically in Linkedin.Section IV explains the two proposed detection methods.In Section V the Linkedin Spam dataset we have created is explained.Section VI analyses the results obtained detecting Spammers profiles and Spam nets by applying the proposed methods.Finally, in sections VII and VIII we comment on our conclusions and the future works respectively. II. RELATED WORK Spam has existed since the Web appeared and it has been growing in importance with the expansion of the Web.Currently, Spam is present in various applications, such as email servers, blogs, search engines, videos, opinions, social networks, etc. Different approaches to Spam detection have appeared [18] [19], however, the best results have been obtained by the methods based on the machine learning approach.Below, we analyse some of the more important articles for the different types of Spam. There are many articles about Web Spam.Henzinger et al. [4] discuss the importance of this phenomenon and the quality of the results that search engines offer.Gyöngyi and Garcia-Molina [3] propose a taxonomy of this type of Spam.Ntoulas et al. [20] highlight the importance of analysing the content to detect this type of Spam. On the other hand, there are studies focused on the detection of Email Spam.Among them, we highlight the work performed by Ching-Tung et al. [21].This article presents a new approach for detecting Email Spam based on visual analysis, due to Spam emails embedding text messages in images to get around text-based anti-spam filters.They use three sets of features: a) embedded-text features (text embedded in the images), b) banner and graphic features (ratio of the number of banner images and ratio of the number of graphic images) and c) image location features.One of the first studies which focused on the detection of Email Spam based on machine learning, was that proposed by Sahami et al. [5]. Currently, Spam in social networks is booming, due to the wide use and the easy access to user data.There are several articles focused on this type of Spam. Gao et al. [22] present an initial study to quantify and characterize Spam campaigns launched using accounts on online social networks.They analyze 3.5 million Facebook users, and propose a set of automated techniques to detect and characterize coordinated Spam campaigns.Grier et al. [23] present a characterization of Spam on Twitter.The authors indicate that 8% of 25 million URLs studied point to phishing, malware, and scams listed on popular blacklists.However their results indicate that blacklists are too slow at identifying new threats.In 2010, Wang presented an article [24], where he proposed a Spam detection prototype based on content and graph features.Another interesting articles focused on Twitter Spam, are the ones carried out by Yardi et al. [25] and Stringhini et al. 
[26], which study the behavior of Twitter Spammers, finding that they exhibit different behavior (tweets, replying tweets, followers, and followees) from normal users (non-spammers). With respect to forum/blog Spam, it is necessary to highlight the study performed by Youngsang et al. [27]. In this work, the authors study the importance of forum Spam and the detection of these web pages by using several new heuristics and an SVM classifier. Another study, presented by Mishne [28], describes an approach for detecting blog Spam by comparing the language models used in different posts. In the literature there are other articles related to other types of Spam. An example is the article by Jindal and Liu [29], where they present a detailed analysis of Spam in the context of product reviews. Mukherjee et al. [30] presented an article focusing on detecting groups of fake reviewers. The authors propose an effective technique to detect such groups, using the following features: ratio of group size, group size, support count, time window between fake reviews, etc. Lim et al. [31] presented another interesting study about this type of Spam. The authors propose a supervised method to discover review Spammers. To achieve that, they identify several characteristic behaviors of review spammers and model these behaviors to produce a ranking of the different reviewers. Finally, we want to highlight an interesting article by Benevenuto et al. [6], where the authors propose a method for detecting Spam in video social networks. They use three sets of features: a) quality of the set of videos uploaded by the user, b) individual characteristics of user behavior and c) social relationships established between users via video response interactions. In this Section, we have presented a wide set of articles related to the different types of Spam and detection methods. Among them are several focused on Social Networking Spam; however, to the best of our knowledge, this is the first study that analyses and detects Spam in Linkedin. Due to its significant differences from other social networks, it is necessary to analyse its characteristics in detail and propose new heuristics to detect Spammers and their Spam nets. Some of its major differences are: it is a professional network, premium accounts allow access to detailed user data, and users can be filtered (using search tools) to conduct Spam campaigns. Moreover, its interesting and distinctive characteristics, from the Spammers' point of view, make this study both useful and necessary.
Spam is one of the most important challenges on the Web. Currently, due to the boom in social networks, the Spam generated in them is growing constantly. Probably, one of the main reasons for this growth is the large number of users that they contain. In Table I we show the number of users for Google+, Facebook, Linkedin and Twitter. There are several reasons why Spammers are using the social networks. These reasons can be divided into 3 topics: • Audience: • Huge audience (see Table I); this means big profits for spammers, even if only a small percentage visit the page or buy the product. • It is very easy to create Spam, because social networks allow direct Spam, that is, the Spammer knows the name and data (job, contacts, skills, etc.) of each of its victims. That is the difference with Web Spam or Email Spam, where the Spammer does not know these data about its victims, only the email and perhaps the name.
• Different options and tools to create Spam: • Fast distribution of Spam, due to user trust and their curiosity.The users trust anything that they see posted by one of their contacts.An example of this, is the use of popular hashtags in Twitter to lure users to their Spam sites.• This type of Spam allows the creation of fake relationship contacts, to make the user's profile appear more real on the social network.• Spammers can send messages to the users and include embedded links to pornographic or other product sites designed to sell something.• Internet social networks contain common fan pages or groups that allow people to send messages to a lot of users even if the Spam user does not have these users as contacts. • Little investment.Unlike other types of Web Spam, which requires investing in domains, hosting, developers, etc., social network Spam only needs accounts in social networks such as Linkedin, Facebook or Twitter. A. How much Spam is there in Linkedin? One approach to determine the incidence of one problem or topic in society is to measure its impact in Google searches (or other important search engines).This method has already been used to study the presence of certain diseases in society [34].In our case, we have used two methods.First, we have measured the trends in the Google searches using Google Trends.Figure 1 shows the obtained results.It depicts the relative amount of searches by year according to the maximum value obtained in 2012 (100%).With the same tool, we have also analyzed the importance of Web Spam versus Linkedin Spam.The results shown in Figure 2, show that the importance of Web Spam (the most important type of Spam) is decreasing compared to the increase in Linkedin Spam. On the other hand, we have used another approach: searching for the query in Google Linkein Spam.The number of results is higher than 53 million pages, which indicates the high presence and concern about social networking Spam. B. Can existing Spam detection techniques be used on Linkedin? Before analysing the existing Spam detection techniques we will explain the different types of Spam.There are different classifications for them, but, the classifications most relevant in the state of the art, performed by Gyongyi and Garcia-Molina [3], and Najork [35], suggest that the main types of Web Spam are: content spam, cloaking and redirection spam, click Spam and link spam. • Content Spam, that is, a technique based on the modification of the content or the keywords of a web page with the purpose of simulating more relevance to search engines and attract more traffic.To detect this type of Spam the search engines use several algorithms and heuristics based on content analysis.However, there are a lot of Spam pages that avoid these detection algorithms.An example of these heuristics was presented by Ntoulas et al. [20]. • Cloaking and Redirection Spam, which consists in dynamically generating different content for certain clients (e.g.: browsers) but not for others (e.g.: crawling systems).There are several techniques to detect this type of Spam, among them we can highlight [36], [37] and [38].The latter approach, proposed by Wu and Davison [38], where the authors detect this Spam by analysing common words across three copies of a web page. • Click Spam: this technique is based on running queries against search engines and clicking on certain pages in order to simulate a real interest from the user.Search engines use algorithms to analyze certain logs clicks and detect suspicious behaviours. 
• Link Spam, which is the creation of Web Spam by means of the addition of links between pages with the purpose of raising their popularity.It is also possible to create "link farms", which are pages and sites interconnected among themselves with the same purpose.In order to detect Link Spam, the search engines analyse relationships and the graphs between web domains.In this way, they can detect domains with a number of inlinks and outlinks or web graphs suspected of being Spam. Finally, there is another detection technique which is not focused on a unique type of Spam.It was proposed by Webb et al. [39] [40], and the authors analyse the HTTP headers and their common values in Spam pages, to detect Spam. As we can see, the existing detection techniques cannot be applied to detect Liknedin Spam.Only the algorithms for detecting Content Spam could be used, however, the problem is that the existing heuristics to detect Spam in a typical web page, cannot be applied to web page profiles of Linkedin because their features are completely different.Due to existing detection techniques being impossible to apply, Linkedin has proposed its own particular detection techniques (see Section III-D). C. How do Spammers create Linkedin Spam and what are the differences between it and other social networks? The first step for a Spammer is to decide whether the Spamming attack will be a focused or general attack.In the case of it being a focused attack, the Spammer will search for specific users by means of the Linkedin tools.After that, the Spammer can create Spam by the following methods: • Messages: these are sent by any one of our contacts. However, we often accept contacts because of having contacts in common, or because we think that the job, groups or skills of this user are appropriate for them to be our contact, and perhaps, it could be a job opportunity. • Groups: that is, those notifications sent to the groups of each of the victims.These messages will be sent by email to the users of the group, and moreover, this post can be seen in the forum of the group. • Updates: the Spammer makes updates to their profile to invite their contacts to visit their profile. After we have analysed the operation of Linkedin Spam, we explain the differences in the creation Spam between Linkedin and other social networks. • Linkedin allows the use of social networks search tools to target a certain demographical segment of the users.This allows the Spam to be made more specific, and therefore, it is likely that the victim will click on the Spam link. • Linkedin is a social network which focuses on business, companies and professionals, which is very interesting from the point of view of the Spammers.In other words, the possible profit will be higher if the person that visits the Spam site is a businessman instead of a teenager from Facebook.Twitter, Facebook or email can be used for professional ends, however they do not contain the other advantages of Linkedin. • Linkedin allows direct Spam, such as Twitter, Facebook or email, but in this case, the Spammer knows the name and data (job, contacts, skills, location, etc.) of each of their victims.So, the probability of success is higher in Linkedin than in Web or Email Spam. 
• Due to Linkedin being defined as a professional social network, usually the data of its users are real, and the usage of it by them is usually a way to find a job or to find new professional contacts; this is not a game.For this reason, when a Linkedin user receives a message, email or update from Linkedin, they pay more attention to it than to notifications of other social networks.Again, the probability of Spam success is higher. in a particular email, message or comment, which is more difficult.So, if we can detect a Spammer, we can also detect all their Spam messages (emails, comments and updates). D. How does Linkedin detect its Spam? Currently, Linkedin uses two techniques to detect Spam.On the one hand, when a user receives an invitation to become a contact of another user, he can indicate that this person is a Spammer.A Linkedin user has numerous methods of contacting a specific user (as a friend, as a coworker, as a classmate, etc.).Linkedin blocks a contact method to a user profile (Spammer), when it has received 5 requests rejecting said account by the same contact method indicating that it is Spam.Alternatively, the user can report the profile of the Spammer to the following e-mail address<EMAIL_ADDRESS>However, from our opinion, these methods are not sufficient, due to two reasons.First, people are lazy, and because of that they will usually not accept this person but also will not usually notify that the user is a Spammer.For the same reason, only in a few cases, the user sends an email to report a Spammer.The second reason is because of the speed and ease with which criminal organizations and Spammers can create a lot of accounts, compared to the slow detection methods used by Linkedin. In summary, as we have explained, the presence and concern of Spam in social networks is high.Due to this, together with the lack of articles about Linkedin Spam, existing methods for detecting Spam cannot be applied and the need for other methods to complement tools used by Linkedin, we propose a method for detecting Linkedin Spammers, and a method for identifying fake companies (Spam nets). IV. DETECTION METHODS We present two detection methods, one for detecting Spammers and another for detecting Spam nets, both in the Linkedin social network.In order to know and understand the behaviour of Linkedin Spammers and Spam nets, and also to test the proposed methods, we have manually built a dataset of Linkedin profiles, classifying them as spammers and legitimate users (see Section V). The method for detecting Spammers (section IV-A) is based on a set of new heuristics together with the use of machine learning.The heuristics have been obtained by means of the manual and statistical analysis of the legitimate and Spam Linkedin profiles.So, we characterize a Linkedin profile, and then decide whether or not it is Spam.For the appropriate combination of these heuristics, we have tried different classification techniques (decision trees, techniques based on rules, neuronal networks and kNN). To detect Spams nets in Linkedin (section IV-B), we have focused on the idea that we have observed during the manual analysis.The fake profiles of a fake company, usually share similarities that allow differentiation between legitimate companies and fake companies. A. 
Method to detect Linkedin Spammers We will discuss a set of heuristics that aim to characterize and detect Spam profiles in Linkedin.Some of the features we present below, have appeared because Linkedin is a professional social network, and their users are very careful with the details of their profiles.Linkedin users want to have an updated and complete profile. The results obtained for each heuristic were tested on the dataset described in Section V.For each non binary feature, we include a figure showing a box and whisker diagram with the feature values, corresponding to Spam and Non-Spam pages.For binary features, we only present the percentage of use for each type of page.The features analysed are the following: • Number of words in profile: we have analysed this feature because during the manual labeling of the pages we have observed, that Spam pages usually contain less words than the legitimate Linkedin profiles.Figure 3a shows that the median number of words in Spam profiles is 454 words.In other words, the Spam profiles contain on average 559.3 words and the No-Spam profiles 741.8 words, 24.6% lower. • Number of contacts: due to the automatic generation of Spam profiles, they present two clear behaviors: profiles with very few contacts or profiles with many contacts.On the other hand, the legitimate profiles follow a uniform distribution of contacts, without these extreme differences.We observe in Figure 3b that in Spam profiles the median is 1 and in the No-Spam profiles it is 81.Moreover, the difference between averages is very significant.Specifically, Spam profiles contain on average 204.8 contacts, while No-Spam profiles contain only 98.1. • Name size: in this case, the deficiency of Spam profiles appears in the name of the person.Fake profiles usually contain shorter names and surnames than in legitimate profiles.This is because Linkedin users are very careful and want their data profile to be correct and updated.To achieve this, they use their complete name and do not tend to use short names or nicknames. As we thought, the results indicate that legitimate profiles have longer names than fake profiles.Figure 3c shows that the median and average in Spam profiles is 10 and 9.09 letters and in No-Spam profiles is 18 and 17.41, respectively. • Location size: we have observed that Spam profiles usually contain a simple and smaller location than in the legitimate profiles.Moreover, the location among the fake profiles of the fake companies are very similar. In Figure 3d shows the median and average location size in Spam profiles to be 9 letters and 14.72, respectively.In the case of No-Spam profiles, the median is three times higher, than the Spam profiles, 28 letters, and the average is 24.64 letters.• Name written in lowercase: another big weakness of fake profiles, is that their name or surname, are often written in lowercase.Figure 4a indicates that more than 20% of the Spam profiles contain user names in lowercase, and in the legitimate profiles this value is almost 10 times smaller. • Rhythmic name: a technique used to draw the users attention and build trust in the profile.Specifically, it was observed that often the Spam profiles contained people whose first and last names start with the same two or three letters.Figure 4b shows that the 5.12% of the No-Spam profiles contain a rhythmic name and lastname, but this figure raises up to more than five times, 27.90, in the Spam profiles. 
• Profile with photo: we have noted two weaknesses in the fake Linkedin profiles regarding this issue.First, in a social network the users usually have photo, in the case of the Spam profile, they usually do not.And second, if a Spam profile contains a photo, this photo can usually be found by a search engine.In Figure 4c we observe that only 22.67% of the Spam profiles contain photo, and in the case of legitimate profiles this value is more than double, specifically 53.84%. • Plagiarism in profiles: another weakness of automatically generated Spam profiles, is that their content is small, or, due to the difficulty in generating logical content, the Spammers take texts from the Internet. We have used the Grammarly plagiarism checker 8 .This system finds unoriginal text by checking for plagiarism against a database of over 8 billion documents. The Figure 4d shows that multiple Spam profiles have copied or automatically generated content.The differences among the results are very significant, specifically 3.45% in No-Spam profiles and 56.53% in Spam profiles. As we have seen, Spam profiles tend to be simpler and contain less detail than legitimate profiles.Moreover, the results obtained for each type of profile (Spam and No-Spam) show significant differences between them.In multiples cases the results are 2, 3, 5 or even 10 times higher or smaller in Spam profiles than in legitimate profiles.These important differences allow the proposed heuristics to be used to characterize and detect Linkedin Spam. The detection method proposed uses these heuristics together with machine learning techniques to identify Spammer profiles.The method is to not focus on a specific heuristic but to use all of them.In the case of failure of a particular heuristic, since the method uses all heuristics, the other heuristics will correct this error.For the appropriate combination of heuristics we have tried different machine learning techniques (decision trees, kNN, SVM and Naive-Bayes).Based on the obtained results (see Section VI), the method for combining the heuristics is kNN. B. How to Detect Spam Nets? We have studied a method for detecting fake companies, companies whose unique purpose is to create fake profiles to generate Spam.The proposed method identifies this type of Linkedin companies based on the similarity among the profiles that each company contains. To measure the similarity between profiles, we have generated a text string that contains the different data of the profile separated by commas ",".A generic example of the text string generated with the Linkedin data profile and the results obtained with the proposed heuristics, is the following: The value of the user name, title of the profile, user location, number of contacts, skills and education are extracted directly from the profile of the user.However, the variables: NumberOfWords, NameSize, LocationSize, RhythmicName, Photo, LowercaseName and ProfilePlagarism are calculated previously, based on the analysis of the profile.Finally, the value of RhythmicName, Photo, LowercaseName and ProfilePlagarism are boolean. As a preliminary step, we specify the following concepts to help us formally define our method: • Let be pi = P rof ile of the user i. • Let be N p = N umber of prof iles of a company. • Let be distance ij = Similarity between the prof iles i and j. 
To obtain this value, we have studied different functions: • Levenshtein [15]: the minimum number of edits needed to transform one string into the other (using insert, delete or replace operations). This distance is usually called the edit distance. • Jaro: a similarity function in which the transposition of two characters is the only permitted edit operation; the characters may be transposed within a distance that depends on the length of both text strings. • Jaro-Winkler [16]: a variant of the Jaro metric which assigns higher similarity scores to words that share a prefix. • Jaccard [17]: defines the similarity between two text strings A and B as the size of the intersection divided by the size of the union of the corresponding token sets. • Cos TF-IDF [41]: given two strings A and B with tokens α_1, α_2, ..., α_K and β_1, β_2, ..., β_L respectively, they can be seen as two vectors, V_A and V_B, with K and L components; the similarity between A and B can then be calculated as the cosine of the angle between these two vectors. • Monge-Elkan [42]: given two strings A and B with tokens α_1, ..., α_K and β_1, ..., β_L respectively, for each token α_i there is a β_j with maximum similarity; the Monge-Elkan similarity between A and B is then the average of these maximum similarities over the pairs (α_i, β_j). • Let S_pi be the similarity of a profile p_i with the other profiles of its company. This value is calculated as the sum of the similarities distance_ij between the text string of the corresponding profile and the text strings of the rest of the profiles, divided by the number of profiles minus one, i.e., S_pi = (Σ_{j≠i} distance_ij) / (N_p - 1). We can now define the method we propose to obtain the value S_c that summarises the similarity of a specific company. S_c is calculated as the average of the similarities S_pi of the profiles of the staff of the company, i.e., S_c = (Σ_i S_pi) / N_p. After we have obtained the similarity value of a company, we have to decide whether said company is fake or legitimate. In order to do that, we have calculated a threshold for each similarity function. These thresholds allow us to decide when a company contains very similar profiles, in which case the company is likely to be fake, or, conversely, when the profiles are different enough for it to be a legitimate company. For that, we have created a training set that contains the 4 fake and 4 legitimate companies with the highest number of profiles. In Table II we show the similarity results obtained on this training set. Among the results obtained, we have selected as thresholds those that obtained the best results (precision, recall and F-Measure) on the training set. The selected thresholds were then used to obtain the results (precision, recall and F-Measure) of the method on the full dataset. Analysing the results, we can see that there are differences in the similarity values obtained by each technique. Levenshtein and Cos TF-IDF obtained the lowest similarity values, around 0.6 and 0.5 respectively. On the other hand, Jaccard obtains the highest results, close to 1. Jaro and Monge-Elkan obtained intermediate results, with values between 0.7 and 0.8. The results obtained by Jaro and Jaro-Winkler are, as we expected, different. This is because Jaro-Winkler scores words which share a prefix higher, and we had observed that the generated string contains prefixes that increase the similarity result.
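As a complement to the textual definitions of S_pi and S_c, the following is a minimal computational sketch using a normalized Levenshtein similarity. The normalization into [0, 1], the helper names, and the toy profile strings are assumptions for illustration, since the article only states which distance functions were evaluated.

```python
# Minimal sketch of the company similarity S_c (Section IV-B).
# The normalization of the Levenshtein distance into a [0, 1] similarity is an assumption.

def levenshtein(a: str, b: str) -> int:
    """Classic edit distance (insertions, deletions, substitutions)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def similarity(a: str, b: str) -> float:
    """Normalized similarity in [0, 1]: 1 means identical strings."""
    if not a and not b:
        return 1.0
    return 1.0 - levenshtein(a, b) / max(len(a), len(b))

def profile_similarity(i: int, strings: list) -> float:
    """S_pi: average similarity of profile i to the other profiles of its company (N_p >= 2)."""
    others = [similarity(strings[i], s) for j, s in enumerate(strings) if j != i]
    return sum(others) / (len(strings) - 1)

def company_similarity(strings: list) -> float:
    """S_c: average of S_pi over all profiles; compared against a per-function threshold."""
    return sum(profile_similarity(i, strings) for i in range(len(strings))) / len(strings)

# Usage with hypothetical profile strings (name, title, location, #contacts, skills, ...):
company = ["john doe, sales rep, NY, 2, viagra, none",
           "jon dee, sales rep, NY, 1, viagra, none"]
print(company_similarity(company))   # a high value suggests a fake company
```

A company whose S_c exceeds the threshold selected for the chosen distance function would be flagged as fake, exactly as described in the paragraph above.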
If we study the results of the legitimate companies and the fake companies separately, we can observe that, as we expected, fake companies display more similarity among their profiles than normal companies. This fact can easily be seen in the results obtained with the Levenshtein and Cos TF-IDF measures.

In Section VI-C we present the precision, recall and F-Measure obtained by applying the proposed method to the created dataset.

V. LINKEDIN SPAM DATASET

To the best of our knowledge, there is no public dataset of Linkedin profiles. The dataset we have built contains legitimate and Spam profiles. The creation of the dataset was carried out over 30 days, from October 30th to November 30th, 2012.

For the legitimate profiles we used profiles of users in well-known companies (Google, Microsoft, Oracle, Twitter, IBM, etc.). The gathering process of the legitimate profiles was performed automatically by means of the Linkedin API 9. We used our own Linkedin profiles to obtain user profiles from the legitimate companies. The process starts at the public profile of an employee of a legitimate company and continues through the contacts of this user who work in the same company.

On the other hand, we identified a set of Spam users, based on the Spam messages that they send to other users and on the profiles obtained by searching for words that commonly appear in Spam comments in Google, such as "viagra", "growth hormone", "cialis", etc. The list of these words was obtained by searching for Spam words in the WordPress Codex (http://codex.wordpress.org/SpamWords), the online manual for WordPress. After this, we created a fake profile on Linkedin, as a honeypot, and sent contact requests to the Spam profiles. All the contact requests were accepted, and the Spammers usually responded 1 or 2 days after the request. Once accepted, as we expected, we could detect new fake profiles among their contacts. The labeling and gathering of each fake profile was done manually due to the need to check whether each profile was really fake.

TABLE II: Similarity of the analysed fake and legitimate companies

We want to clarify that, in order to obtain the legitimate profiles, we did not associate the created fake profile with the reputable companies. Neither the legitimate profiles nor the fake profiles have been published anywhere, and they have been stored encrypted.

To give more details about the generated dataset, we describe its structure:
• Its size is 1.4 GB. To decide the size of the dataset and of its subsets (Spammers and legitimate users), we faced the problem that there are no studies about how much Spam there is in Linkedin. We therefore decided that the proportion of Spam profiles should be similar to the amount of Spam found in the .com domain [20]. Moreover, we used a percentage of Spam higher than that of the .com domain, because we wanted to increase the variability of the profiles in order to make the detection process more difficult.
• The profiles are divided among 150 companies, of which 50 are fake and 100 are legitimate.
• Among the fake companies are companies focused on different drugs, such as Viagra or Cialis, and on Chinese products.
• 80% of the legitimate companies are focused on the technology and computer science area.
• The companies are mainly located in the USA.
• The size of the legitimate companies is variable, ranging from companies with 500 employees to those with more than 20,000.
The provided data about the dataset allows other researchers to test and compare our methods. Their dataset will likely not contain the same profiles as ours; however, in our opinion, they can create a dataset with the same structure and features, and therefore the results should be very similar.

VI. EXPERIMENTAL RESULTS

In this Section, we discuss all the issues we found during both the execution and the assessment stages, and we show and analyse the results obtained. First, we show the results obtained when detecting Spammers in Linkedin using the proposed heuristics (see Section IV-A), and then the results obtained when detecting Spam groups or companies by analysing the similarity between the company user profiles.

A. Experimental Setup

To execute the different classifiers, we used WEKA [43], a tool for machine learning and data mining, which includes different types of classifiers and different algorithms for each classifier. The techniques tested were: SVM, Naïve Bayes, Decision Trees and Nearest Neighbour. To obtain the results, we used the default parameters in WEKA for each of the machine learning algorithms; in particular, the value of k used in kNN was 1 (an equivalent configuration is sketched at the end of this section).

To evaluate the classifiers we used the cross-validation technique [44], which consists in building k data subsets. In each iteration a new model is built and assessed, using one of the subsets as the test set and the rest as the training set. We used 10 as the value of k ("ten-fold cross validation").

The dataset used to obtain the results was created by us, because there is no public dataset of Linkedin profiles. The method used to generate it and its characteristics were explained in Section V.

B. Results for Linkedin Spammers

This section discusses the results obtained when applying the proposed method to discover Linkedin Spammers. Table III shows the precision, recall and F-Measure for each of the types of classifiers studied. The best results are obtained using Levenshtein and Cos TF-IDF. In these two cases the detection is perfect, with an F-Measure of 1. These results indicate that the way these two techniques measure the similarity between profiles is the most adequate for the text string generated by our method.

In the analysis of the results, we have detected two types of Spam profiles. On the one hand, there are very simple Spam profiles with little content and, on the other hand, complex Spam profiles with a great deal of content (in most cases, automatically generated). To detect this second type, all the proposed heuristics must be used because otherwise such profiles could be missed.

However, each fake company contains only one of these two types of Spam profiles, and always with very similar content. In short, the method proposed to detect Spam nets has achieved promising results, mainly due to two facts: a) the proposed idea is sound, and the fake companies do contain similar profiles, and b) Social Networking Spam is relatively new and its techniques are unsophisticated. In the future, these techniques are likely to improve. However, we have demonstrated that the proposed idea works well and can be used in the future as a base to be complemented with other new techniques.
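The experimental setup described in Section VI-A can be reproduced outside WEKA. The following sketch shows an equivalent configuration in scikit-learn; the feature files, the feature layout and the scoring choice are illustrative assumptions, not details of the original experiments.

import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

# X: one row of heuristic values per profile (number of words, name size,
# location size, rhythmic name, photo, lowercase name, profile plagiarism, ...)
# y: 1 for Spam, 0 for No-Spam. The file names are hypothetical placeholders.
X = np.load("heuristics.npy")
y = np.load("labels.npy")

clf = KNeighborsClassifier(n_neighbors=1)                  # k = 1, as in the setup above
scores = cross_val_score(clf, X, y, cv=10, scoring="f1")   # ten-fold cross validation
print("mean F-Measure: %.3f" % scores.mean())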
VII. CONCLUSIONS

The presence of different types of Spam (Web Spam, Email Spam, Forum/Blog Spam and Social Networking Spam) on the Web is significant and constantly growing. There are many studies that analyse and present techniques for the detection of the different types of Spam. However, to the best of our knowledge, there are no studies about Spam in Linkedin.

In this article, we present a method to detect Spammers and Spam nets in the Linkedin social network. We have proposed a set of heuristics that characterize Linkedin Spam profiles and help to identify Linkedin Spammers. These heuristics were used as input to several classification algorithms (Naïve Bayes, SVM, Decision Trees, kNN). The best results are obtained by kNN and decision trees, with an F-Measure of 0.969 and 0.967, respectively, and an AUC close to 1. Moreover, we have proposed a method for detecting Spam nets in Linkedin. It is based on the idea that the profiles of fake companies share multiple similarities. The method calculates the similarity between the different profiles of a company using several distance functions (Levenshtein, Jaro, Smith-Waterman, etc.). The similarity values obtained are compared against thresholds to distinguish fake companies (Spam nets) from legitimate companies. Again, the results are very promising: we achieved an F-Measure of 1 using Levenshtein and Cos TF-IDF.

In short, the results obtained in this study show that, on the one hand, the proposed heuristics are adequate for detecting Spammer profiles and, on the other hand, the new method proposed to detect Spam nets (fake companies) in Linkedin performs very well.

VIII. FUTURE WORK

Spam in the Linkedin social network is a relatively new type of Spam, so our intention is to follow its evolution over time. Due to the continuous change of Spam techniques, we want to find new and better heuristics to detect this type of Spam. Furthermore, we plan to increase and improve our labeled dataset. Finally, we will test the proposed heuristics and the method for detecting Linkedin Spam nets on other social networks.

Fig. 3: Number of words (a), contacts (b), name size (c) and location size (d) in the Spam and No-Spam profiles
Fig. 4: Percentages of names written in lowercase (a), rhythmic names (b), profiles with photo (c) and plagiarised profiles (d) in the Spam and No-Spam profiles
TABLE I: Number of users, in millions, for the main social networks
TABLE III: Results of the proposed heuristics using different types of classifiers
TABLE IV: Results of detecting Linkedin Spam nets
9,667
2013-01-01T00:00:00.000
[ "Computer Science" ]
Comment on acp-2021-924

If one takes the excess carbon from fossil fuels in the atmosphere, about 200 Gtons, and divides it by the ocean uptake rate of carbon, about 2 Gton/yr, one arrives at the erroneous result that the fossil carbon will go away in 100 years. This result, analogous to that presented in this paper, is wrong because as the carbon invades the ocean, it alters the buffer chemistry of the ocean, and hence further uptake of CO2, by depleting the CO3(2-) ion. Complete drawdown of the CO2 awaits first the restoration of ocean pH by the CaCO3 balance, and ultimately CO2 fluxes from volcanic emissions and the weathering of silicate rocks.

A realistic ocean representation must also reproduce the observed structure of a relatively warm upper ocean and a cold deep ocean. The observed distribution is a result of accounting for the downwelling of cold water in polar regions and its gradual upwelling over much of the area of the lower and mid-latitudes. It is also the case that simply representing the upper ocean as a wind-mixed layer, without accounting for transport along isopycnals from near the surface in mid-latitudes down to several hundred meters in the lower latitudes, is not likely to be adequate. These ocean influences can all be represented with a quite simple parameterization without having to include a globally finely resolved ocean representation.

The proposed ocean model basically ignores this well-established understanding of the ocean circulation and instead includes only a one-box deep ocean. Were this the approach used in a climate model, the equilibrium result would be a deep ocean and mixed layer having the same temperature. This is not only not the case in the real world in terms of the actual temperature, but even if trying to model only the perturbed temperature, one would not expect the change in the upper and lower temperature to be the same. And since CO2 is transported in the ocean by the same ocean circulation as the heat, one would not expect that the CO2 is being correctly moved around in the ocean.

Despite this situation, the author goes to great lengths to defend his one-box approximation for the deep ocean, citing papers not really related to the type of use to which he is putting the model, basically saying global models are too uncertain. Rather than doing what has been done in simplifying down climate models, which calibrate themselves to the complex models, this author basically claims to be sufficiently correct. Given all the time that has been put into this paper, it would have been quite easy for the author to actually investigate whether his approximation and a slightly more complex model would give the same result. He could readily put in a one-dimensional ocean model with a polar downwelling pump, as is done in the MAGICC climate model (and has been done in many other simple climate models). Another useful test would be to check the dependence of the results on the assumed depth of the ocean. Detailed models and observations both suggest that downward mixing of heat in the ocean goes only about as deep as the thermocline (so maybe down to 750 meters total or so, including the mixed layer) and cannot go deeper because of the upward ocean movement in low latitudes that results from the polar downwelling. With radiation transport not a process in the ocean, it is the circulation that carries the heat around, and so the circulation could also be used to carry the CO2 around.
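To make this concrete, a toy calculation (all parameter values illustrative) shows that a mixed layer coupled to a single deep box by a constant transfer coefficient, with no other process, can only equilibrate to a uniform temperature:

# Two boxes exchanging heat through a constant transfer coefficient and nothing
# else can only equilibrate to a uniform temperature. All values illustrative.
h_mix, h_deep = 100.0, 3700.0   # layer thicknesses (m)
k_ex = 1.0                      # exchange coefficient (W m-2 K-1)
c = 4.2e6                       # volumetric heat capacity of seawater (J m-3 K-1)
T_mix, T_deep = 20.0, 4.0       # start from a realistic surface/deep contrast (deg C)

dt = 3.15e7                     # one year in seconds
for year in range(20000):
    flux = k_ex * (T_mix - T_deep)        # W m-2, downward when the surface is warmer
    T_mix += -dt * flux / (c * h_mix)
    T_deep += dt * flux / (c * h_deep)

print(round(T_mix, 2), round(T_deep, 2))  # both ~4.4 deg C: the contrast cannot persist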
If such a big claim is going to be made about the quite short lifetime of a CO2 perturbation if emissions are suddenly halted, there is just no excuse for not doing the extra modeling to prove that this choice is justified.

Unless one reads very carefully, it is hard to find out that what is really happening after emissions stop is a redistribution of the total amount of injected fossil-fuel CO2 among the various reservoirs, leaving the atmosphere with a percentage of its peak excess concentration persisting essentially indefinitely. The atmospheric perturbation due to fossil fuels would not naturally go back to zero until all the emitted CO2 is taken up in ocean sediments or permanent land carbon. The IPCC has a simple representation of the decay in the atmospheric reservoir that involves, I think it is, five (or maybe four) separate exponentials, each one accounting for the time it takes for the carbon to mix into each reservoir, and the coefficient being the share that would remain in each reservoir out to quite long times. The paper does not present the IPCC approximation nor explain how its results compare for each reservoir, and it would be informative to do this.

It is strange in Figure 7 that the amount of carbon in the terrestrial biosphere appears to remain constant at the amount that is in these reservoirs at the time of the peak CO2 concentration, such that, apparently, a reduction in the atmospheric CO2 concentration does not lead to a reduction in the terrestrial carbon reservoir. This seems very strange given that the equilibration is quite rapid as the CO2 concentration is increasing (FACE experiments seem to show that equilibration for shrubs and grasses occurs within a few years; trees would take longer). This needs to be checked. Normally, it is thought that as the CO2 concentration drops, CO2 will be, in essence, exhaled by the biosphere back into the atmosphere. It would also be worth looking at the transfer from the living biosphere to the long-lived biosphere; this is normally thought to be a pretty slow process. Perhaps I missed it, but it would be interesting to have indications of how much carbon ends up in each of the model's 5 reservoirs in the decades and then centuries after the ending of emissions. It is also not clear whether there should be a limit on how much carbon can build up in the terrestrial biosphere, given that the areas of growth are finite, the supply of nutrients is finite, trees have genetic limits on how large they can become, and one cannot just squeeze in more and more trees, as density is limited.

It would be interesting to see model results compared for more cases than simply a sudden cutoff of peak emissions to zero. So, what about a gradual reduction in emissions? What about if emissions continue? This is important because emissions are not going to suddenly go to zero, but will decrease slowly, and so there will be very little adjustment right at the zero point, as the adjustments will have been going on all along as the emission levels drop.

As near as I can tell, the model looks at the perturbation of the carbon cycle due to humans. It would be interesting to know, if one simply put the total amount of carbon of the preindustrial period into the various reservoirs, whether it would distribute to the reservoirs as observed and whether this would be a steady-state condition. And then, again from baseline amounts, it would be interesting to know what the distribution might be for some larger amount of carbon.
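For reference, the IPCC-style representation mentioned above can be written as a constant share plus a short sum of exponentials. The following sketch uses the coefficients of the Joos et al. (2013) multi-model fit for a CO2 pulse as indicative values; they should be checked against that source before reuse.

import math

# Shares and e-folding times of the Joos et al. (2013) multi-model fit:
# a fraction a0 effectively never leaves the atmosphere on these timescales,
# and the rest decays with three distinct time constants.
a0, terms = 0.2173, [(0.2240, 394.4), (0.2824, 36.54), (0.2763, 4.304)]

def airborne_fraction(t):
    # Fraction of a CO2 pulse remaining in the atmosphere after t years.
    return a0 + sum(ai * math.exp(-t / tau) for ai, tau in terms)

for t in (0, 10, 100, 1000):
    print(t, round(airborne_fraction(t), 3))
# Roughly 68% remains after 10 years, 41% after 100 years, and about 24%
# persists after 1000 years: there is no single decay time.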
Given that the end point of the study described in the paper is the redistribution among reservoirs, is the model set up so that the observed distribution would result were the preindustrial total loading of C the initial condition? Right now it appears that the preindustrial distribution is prescribed rather than used as a test of the model representation.

Note: I have not reviewed the appendices nor examined all the equations in detail, generating my comments based on the text that is presented; and this was all done pretty quickly.

Specific Comments and Thoughts/Suggestions (noted as they come up, and sometimes repeated if points arise again):

Lines 23-24 (and elsewhere): My understanding of the current approach is that, in representing the decay time in one equation as opposed to a model with each process separately represented, there are five or so time-decaying exponentials, going separately into the biosphere, surface ocean, deep ocean, sediments, soils, etc., and so characterizing the time as if it were one number is just not a correct characterization of how the number is currently viewed. That is, there will early on be a short time constant as emissions into the atmosphere redistribute partly into the mixed layer and living biosphere, but once this occurs, the time constant will be much longer for the redistribution into the deep ocean and long-term terrestrial biosphere.

Lines 28-29: A bit strange to be quoting Ramanathan when it was Revelle, I think it was, who talked about this as a 'great geophysical experiment.' I don't know the reference for this, but it might be the 1965 report of the President's Science Advisory Committee (PSAC), in the chapter (or annex) on climate change that Revelle chaired, or in his paper with Harmon Craig, I think it was, on ocean uptake of C-14 and/or other species.

Line 34: I don't really like the idea of saying just "anthropogenic CO2" as if the actual molecules are different than the CO2 in the air. Actually, the adjustment that is occurring is based on all of the CO2/carbon that is in each reservoir, and one cannot just do a difference of the anthropogenic CO2 amounts. I think this is important to be careful of because, on the other hand, the C-14 generated by nuclear testing is so dominant in amount that it is, as I recall, most of the C-14 in the atmosphere, and there is also radioactive decay of those amounts going on over time.

Lines 40-41: Are there really estimates that suggest the lifetime of the perturbation is only a few years or a few tens of years? Those seem very, very low if one is talking about the lifetime of the full increase in the anthropogenic loading. I do understand that some experts do say that there would be a fast initial drawdown, but this would not take out anywhere near all of the overall anthropogenic loading, as I understand things. So, is this a comparison of apples and oranges? I see the paper does cover this somewhat in the following sentences, but it seems to me confusing to be giving a time constant based on an initial slope (based, I would imagine, on how the airborne fraction allocation is going on) that does not lead to all of the perturbation being removed.

Line 54: CO2 does not really "decay"; it appears as part of different compounds or in different forms, but does not really disappear the way C-14 does. Strange word choice.
Line 64 (regarding Figure 1): It appears that the approach does not account for the increasing time it takes to mix into the deep ocean and then into the sediments, so that the atmosphere gets to its adjustment time quite quickly. This seems incomplete because the time constants for spreading through the longer-term reservoirs do not seem to show up as they do in other model estimates. This seems a bit strange.

Lines 83-84: On the relatively rapid response of surface temperature, that is really because of the relative magnitudes of the heat capacities of the various reservoirs. The atmospheric heat capacity is equivalent to less than three meters of ocean water, and so with the upper ocean being roughly 100 meters deep, the atmospheric temperature tends to follow the upper ocean temperature. The deep ocean is of order four kilometers deep, and the flush-through time of the upper ocean is of order 25 years, but for the deep ocean it is more like 1500-2000 years. And then there is the fact that the IR emission at the surface of the ocean will shorten the temperature adjustment time as well, so this is not surprising. It is not clear to me that this can serve as an analog for the carbon cycle adjustment time, in that the relative amounts of substance in the various reservoirs are more even and there is no real loss term the way there is an IR loss term for the energy.

Line 103: As I think I have said before on this approach, the two-layer approach to representing the upper and deep ocean has been shown to be fatally flawed in representing the ocean; basically, over time, the two boxes would tend to have, for example, the same temperature, as opposed to a warm ocean on top and a cold ocean below. That is why the simple models are set up with a one-column ocean with multiple layers and slowly rising waters, and then a pipe from the upper ocean directly to the deep ocean to represent the polar downwelling of cold, dense waters (see the explanation in the documentation of the MAGICC model). And for carbon in the main column with its multiple layers, one would have to represent carbon going down a bit by biospheric action and then dissolving on the way down and being carried back up. I also wonder how you are (or are not) representing the compensation depth (the depth at which CO2 tends to dissolve from sediments, and which is being affected by ocean acidification). So, just to note, I am already now very suspicious of the approach. Given that such an additional representation would not add substantial time or complexity to your model, I'm surprised that you have not done this.

Lines 115-117: Just to note that virtually all of the observations to be used for constraining the model are for a situation where the CO2 is increasing, and assuming reversibility seems to be a rather significant assumption (e.g., waters are thermally stratified, and advection is not really reversible, etc.). So, if you had an upwelling-diffusion model (a downward polar pipe and an upwelling column), there would not be the same reversibility that the model now seems to have. And, I might ask, is the living biosphere flux reversible? Will a lower CO2 level lead to a lower amount of biomass, as one would expect?

Lines 163-165: I agree there is much confusion, especially about the difference between the lifetime of a particular molecule of a substance and the lifetime of the perturbation to a concentration in situations where molecules of the substance are going in both directions across an interface.
There is also difficulty when the substance is spread among multiple reservoirs that each have particular and quite different exchange times (hence the IPCC's five or so decaying exponentials).

Lines 172-178: Well, that is a start at the problem; there are then multiple components and varying types of transfer processes, not just diffusion, etc.

Lines 226-227: But is the labile exchange time the same in both directions? It is quite fast if one has new plants growing, but once created, decay can take time, and I'd think there can be rather long lag terms if one grows actual trees, which might keep growing even though the CO2 is down a bit. I'm just not sure that I agree there is an equivalent exchange in both directions; yes, for leaves, etc., but I'd not think that is the case for new wood that is created, which could have a hundred-year return time.

Line 272: I'm just not up to trying to work through all the equations, so I will be responding to the text. I'd just note that the labile cycle of uptake as leaves and as wood would be different, as would the times of decay. I'm just not at all convinced such a simple model will be sufficient (so this is a caveat I am holding, waiting to hear about the tests being run). What is normally done, as for MAGICC, is to calibrate the simple model against complex models. While I understand you want to calibrate directly, I think there is really a need to explain why the difference with the models is occurring and whether the assumptions made in going to the simple model capture this.

Lines 280-285 or so: The problem with using a half-life or 1/e times is that even small amounts of the perturbation affect the climate; there really is not a tolerance level.

Line 291: Given how much discussion you have here, I'm surprised that you don't present the IPCC decay function, that is, the sum of five (or so) exponentials, for comparison. Does your model agree with the decay times the IPCC has for each of the terms and for the fraction going into each reservoir?

Line 317/Line 3010: Just a note on Figure 2 that the number 120 for F_al has the 1 sort of lost due to some sort of cropping, so it looks like 20.

Line 360: Ah, interesting, and good to hear of the separation.

Lines 423-424: Treating everything below the mixed layer as the deep ocean seems to me to be a serious oversimplification, given how the isopycnals really control downward transport in middle and low latitudes, allowing a good bit of horizontal mixing from the surface ocean down to several hundred meters, but not below 750 meters or so due to the upward motion that is balancing polar downwelling.

Line 446: I agree this is a curious anomaly. I started drafting a note for AGU/EOS many years ago but never got to completion. One of the interesting thoughts I heard about it had to do with possibly the spreading of weeds, etc. over croplands as a result of so many men from farms being pulled into the military (as I recall, the flattening started early in the war years), and then perhaps a change after World War II as weeds were cleared, and then changes in the amounts of cropping of C3 versus C4 plants and relative carbon uptake. This all got to be more than I felt that I could get into, and so I sort of abandoned the paper; but it seems to me finding an understanding of this pause (which did not seem to be in emissions) might be very insightful. Perhaps with your model, that could be something to look into. [And I should note that I think there is a paper or more looking at this in detail, perhaps by an Australian author or two.]
Lines 467-468: I really don't like this notion of a piston velocity to represent the links between the mixed layer and deep ocean. I'd really suggest trying a different representation of the ocean and seeing if your conclusion holds up.

Line 476: On the flushing time of the deep ocean, 650 years seems lower than what Broecker, etc. have talked about, which I think is more like 1000-1500 years or so. And I'm not sure that flushing time is the right way to think about it, instead of as more like a pipe that takes 1000-1500 years or so to pass through, so that the increased uptake of CO2 does not become available to the atmosphere until after that much time [though isopycnal mixing does lead to some higher amount of CO2 down several hundred meters (or even more) in the lower latitudes]. Again, I think that the ocean component of the model needs to be upgraded. And I'd also note that I don't think ocean processes are simply reversible; time constants on that will apply.

Line 486: So, if you change the ocean to a deep ocean with a pipe going down from the surface in high latitudes and then the water spreading out, this provides the basis for slow upwelling to occur as new amounts of downwelling water push underneath and lift it up. The downwelling flow is a result of dense, cold water sinking, and the CO2 amount is based on how much CO2 can be held at equilibrium by the cold water with the increased atmospheric concentration. This downward transport of CO2 would then increase only slowly if the overturning circulation stays the same, even with the atmospheric concentration going up. And the water coming up into the bottom of the thermocline layer (at 750 meters) would be staying at about the same loading as preindustrial due to the long circulation time. This would all be very different from how things work with a piston-type approach to representing the downward flow. An additional advantage of actually representing the overturning circulation would be that you could then do experiments changing the amount of the ocean overturning (with some suggestions that this overturning amount would be reduced by Atlantic surface warming and by reductions in the amount of rejected dense brine water as sea ice freezes, and some indications that this change is already occurring). Were the overturning circulation to stop, the net uptake of CO2 from the atmosphere and its downward transport would necessarily go down a lot (also less upwelling of nutrients into the mixed layer to feed the biospheric pump of C to the deep and intermediate ocean). With your piston approach, there would not be such a reduction in the flux; it would just keep going (wouldn't it?). I guess what I would recommend is to apply your model framework to the energy in the system and see if the resulting vertical temperature distribution would result. Basically, the transport of CO2 and heat occurs in the ocean in the same way (there is no radiation term to have energy jump from one layer to a much different layer), so whatever structure you have should work for both, and getting the deep ocean to be cold won't occur with a piston velocity.

Lines 520-523: Here I totally disagree (or misunderstand; perhaps you are just referring to the amount of wind-driven exchange that is going on, and not the net transfer).
The transfer from the atmosphere to the mixed layer depends on the gradient in the CO2 concentrations, and I don't understand how you can say this is constant given that the annual increase in the CO2 concentration has changed over time due to changes in emissions (is what you are doing not assuming instant equilibration of ocean and atmosphere instead of allowing a gradient to form and drive the flux into the ocean?). Also, with the colder (saturated) water rising up in lower latitudes, it emits CO2 as it warms, and then in high latitudes, as the ocean waters cool, they take up CO2. So, I just don't understand how you can say the gradient will be constant over time. And as the overturning circulation changes, there will be a change in the flux; so how can you make an assumption about it being constant? On the mixed layer to deep ocean fluxes, etc., there are the downward flux flows (in the downwelling waters and the biological pump) and then the upward flux flow in the slowly rising waters that went down long ago and are now rising with the CO2 burden of the past. I think each of these fluxes needs to be kept track of separately. Also, given that the biological activity is dependent on the amount of nutrients that are carried upward, if the overturning circulation changes, then the flux of nutrients changes. I get the sense that your model would work reversibly, whereas with a more complete representation this would just not be the case; things are not instantly reversible.

Lines 586-587: Nice to use a transfer coefficient for heat, but just to note that your ocean would not lead to a cold deep ocean, and so it would really be inappropriate to be using the heat influx for transfer into the deep ocean; you really need to do a better representation of the ocean.

Line 655: On the issue of CO2 fertilization, there are also limits imposed by the supply of nutrients and of water, and so there would seem to be a need to be very careful about this, especially as the area of the relatively dry subtropics is growing.

Lines 688-691: So, have you run your model from preindustrial concentrations into the future assuming no emissions, such that the model holds the CO2 concentration constant? I would think that this would be something to explain at the start of this subsection as proof that the model is stable in the absence of emissions. Again, I'm not really clear on how you are calculating the fluxes into the ocean and biosphere. Are these fluxes being driven by the gradient created each year by the emissions? If so, then if emissions go to zero, there would no longer be a gradient, and so how would the flux continue to be the same? Or is this turnover time you calculate based on the current flux rate driven by the annual emissions? I just don't see how this number of 44 years accounts for the fact that as emissions go to zero, the gradient driving the flux would go down and so then would the flux. So, shouldn't the net flux from the atmosphere to the mixed layer go down exponentially over time, and do so quite rapidly, as you are assuming a quick adjustment time of the atmosphere and mixed layer? Now, it might be that in your model the mixed layer to deep ocean flux would stay the same, and so this would keep pulling down the mixed layer concentration and so then sustain the atmosphere to mixed layer flux. If so, then, again, making sure that you have the ocean exchanges properly represented is critical, and, as I've said, I don't like the ocean circulation that you have.
Basically, what will happen over a thousand years or so is that the amount of C would re-equilibrate so that the total fossil fuel burden is spread through the upper and deep ocean (and as that process pulls down the CO2, it will pull CO2 back out of the labile biosphere reservoir). So, on your rate, is that an initial rate? I don't see how that can persist as the atmospheric concentration is pulled down, as the time constant for the mixed layer and deep ocean is so long. In this regard, it would also be interesting to know what sort of steady equilibrium occurs with different total amounts of C in the combined set of reservoirs. So, for the amount in the preindustrial world, the equilibrium is 278 ppm (say); would your model come to that equilibrium at that level with the total amounts of C in the non-sediment reservoirs? What about with 50% more? Or are your equations all based on the departure of the system from the 278 ppm base level, so that you will inevitably come out at that level even if the distribution among reservoirs is not started as it was observed to be? (So, what would happen if you put all the C in the deep ocean at t=0 and then ran to equilibrium? What about if you started with all the C in the atmosphere; would the preindustrial CO2 level result?)

Lines 700-701: Okay, so one exponential is the time for the atmosphere and mixed layer to equilibrate. This does not get rid of the overall perturbation.

Line 708: Missing word; should be "it is useful".

Line 726: Typo; should be "the situation".

Line 736: Just to note that the water coming up can be supersaturated in CO2 as a result of the dissolution that occurs as the biological pump is working, so that CO2 keeps getting recycled up. Presumably, as the CO2 concentration goes down, this will be affected as well. Again, I think there is a real problem in an overly simplified ocean model.

Lines 750-751: Does this large flux back and forth allow for the seasonal build-up and release of C by the TB, as is indicated by the annual cycle of the CO2 concentration seen in the Mauna Loa record? Might that variation suggest you need to subdivide the LB box?

Line 756: This just can't be the case for growing trees. The FACE experiments might suggest there is a few-year equilibration for weeds and grasses, but I don't see how this could be the case for wood, as it would take much longer times for a full forest to come to equilibrium. So, again, perhaps the LB needs to be subdivided.

Lines 806-808: So, you have no removal to the sediments, and so there is no ultimate sink of the C; all there will be is a redistribution among reservoirs. Is that correct? If so, you will never get back to preindustrial if you have added C emissions. So, then, you might have a decay rate, but the level will never get back to preindustrial? Is that correct? Is there any limit on how much the obdurate reservoir can build up? Basically, there is only so much land for buildup to occur unless you have a sink to peat and eventually back to fossil fuels. So, how much can really build up (are there limits due to nutrients, etc.? Should you have a return term based on wildfires? etc.)? And, using the piston approach, the vertical distribution of the excess CO2 will be wrong (well, in fact, the single box does not have a vertical distribution; but, in essence, any amount taken up will instantly be creating a back flux, which is just not how the ocean works).
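A toy version of the gradient question raised at Lines 688-691 may help fix ideas; all numbers and the rate constant below are illustrative.

# If the atmosphere-to-mixed-layer flux follows the concentration gradient, it
# decays as the reservoirs equilibrate rather than persisting at its initial value.
k = 0.2                      # exchange rate (1/yr), illustrative
d_atm, d_mix = 100.0, 0.0    # excess carbon above preindustrial (GtC)
for year in range(30):
    flux = k * (d_atm - d_mix)     # net flux follows the gradient
    d_atm -= flux
    d_mix += flux
    if year in (0, 4, 29):
        print(year + 1, round(flux, 2), round(d_atm, 1))
# The flux shrinks as the gradient closes; the perturbation is only
# redistributed (here 50 GtC in each reservoir), not removed.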
Line 816: So, in setting an S_a^eq, does this mean this is a perturbation model and is not based on gross amounts in a reservoir, such that you will return to the preindustrial value even if, through emissions, you add a large amount of C to the system that, one would think, would end up distributed among the various reservoirs? I also don't understand why the difference would be with respect to the equilibrium value in the fourth term of the equation; again, why is the difference taken with respect to the concentration in the previous time step (the emissions increase the atmospheric concentration and then this increases the mixed layer loading, etc.); what is the equilibrium value and why is it used?

Line 817: Why is the flux not simply based on the gradient between the atmospheric and the mixed layer values? What are these equilibrium values?

Line 823: Something is missing; "in" what?

Line 903/line 3066: In Figure 7a, I do not understand how the amounts in the TB stay identical once the emissions stop. Over time, C will redistribute to the deep ocean, and this will lower the atmospheric concentration, and this will pull the labile C down pretty quickly and eventually the obdurate C. How is it that these stay identical over time?

Line 1135 or so: Have you also run comparisons of the models into the future with emissions continuing, to see how you match or don't? This would seem interesting to see as well.

Lines 1186-1188: So, this is not really clear from the earlier discussion, namely that the atmospheric level, for example, does not decay to zero, but to 16-22% of its peak value. That would have ongoing climatic effects, and so the temperature would not return to preindustrial. Basically, there is a distribution of the fossil fuel carbon among the reservoirs, which is just how the IPCC represents things in its five-exponential (I think it is) equation. I'd like to see how your model results for each reservoir compare to the term in the IPCC equation for that box. I'd guess the ultimate distribution might be similar; well, except I just don't like how your ocean is represented. Just a note here that I would like to see how your model performs versus carbon cycle models assuming that global emissions go down over several decades to zero (say, to net-zero by 2050). Are the results similar or different?

Lines 1347-1349***: Given this result, it would seem that the interactions and links between the mixed layer and deep ocean need to be done in a much more representative way than is done in your 5-box model. We know, for example, that such a simple model would not explain the cold deep ocean and warm surface, and that this feature is a result of how the ocean circulation is represented (there is no radiation transport, which along with convection is needed to explain the atmospheric structure). In that CO2 is carried along by the circulation (save for the biospheric pump), I just do not think it appropriate to use a formulation that does not appropriately represent the ocean circulation. There is just too much of a chance that the result you get, being different from that of the more complete models, is a result of the inadequate representation of the ocean circulation, to accept your results as a serious challenge to the models.

Lines 1457-1459 and following: Might this difference be because your model leaves the amounts in the terrestrial biosphere constant in time, which just seems wrong?
Might there be a term missing to reduce those amounts over time as the atmospheric concentration declines? Going back to Figure 7, your model seems to have the TB indefinitely holding a very large fractional increase in amount as a result of the fossil fuel emissions, and the fraction seems so large it is just not clear there is enough land for this; or perhaps all trees have to grow as tall as redwoods or something. A persistent increase of the amount shown just does not seem plausible to me. Maybe what your biosphere model needs is to have the age of trees limited to some number, and then those trees decay and new trees grow up with a lower CO2 concentration; one just cannot keep having such high values (so is your algorithm based on the net mass of the trees, or on the fact that there is an ongoing exchange going on all the time, with an average tree life of perhaps, say, 40 years or so, as many trees that try to grow die off as others succeed, etc.?).

Line 1594: But note that MAGICC has a much better representation of the ocean!!

Line 1601: But you are going much further out toward equilibrium, so I am not sure a C-14 result is really applicable. With gradients, most will surely be in the ocean, so what is so hard to figure out about that?

Lines 1614-1615: I don't recall the models in the papers cited well enough to know how they dealt with the oceans. Very clearly, one cannot get the baseline ocean temperature distribution with the way you have done a two-box model, and it would be simply incorrect to think, even when modeling only the perturbed amount of heat, that the temperature anomalies in the mixed layer and deep ocean would be the same. I just don't think you can justify this model for the time period you are talking about. It seems to me you just can't claim this; you really need to justify it by putting in a better representation, getting results, and then showing that the simpler model is adequate. Just saying so and citing a few papers with models being used for a different substance is just not compelling or convincing to me. It would not be hard to put in an ocean representation that is better (it could likely be done in a time a lot shorter than reading your quite long paper), so I think you should just do it. At the very least, make your deep ocean box only 650 or so meters deep so it only includes down to the thermocline.

Lines 1618-1622: It is one thing to start with a complex model and work toward a simpler model, checking as you go that the results match. It is quite different to start with a simple model and just sort of claim it works.

Lines 1627-1628: You have chosen a model form that is known not to yield a good representation of the temperature distribution or of the perturbation to temperature that the models get. Even though there are uncertainties, your ocean model is just known not to represent the effects of the ocean circulation, which is what is essential to be representing.

Lines 1657-1665: I'm sorry, but given the ocean representation you've chosen, which does not adequately represent the ocean circulation, this is just not convincing to me (and there seem to be some problems with the terrestrial biosphere representation as well).

Line 1706: At this point, I'm going to pass on going over the appendices; this paper is really quite long, especially Section 8, and quite a challenge to get through.
8,516.2
2021-12-14T00:00:00.000
[ "Environmental Science", "Geology" ]
Design of a Lightweight, Cost Effective Thimble-Like Sensor for Haptic Applications Based on Contact Force Sensors

This paper describes the design and calibration of a thimble that measures the forces applied by a user during manipulation of virtual and real objects. Haptic devices benefit from force measurement capabilities at their end-point. However, the heavy weight and cost of force sensors prevent their widespread incorporation in these applications. The design of a lightweight, user-adaptable, and cost-effective thimble with four contact force sensors is described in this paper. The sensors are calibrated before being placed in the thimble to provide normal and tangential forces. Normal forces are exerted directly by the fingertip and thus can be measured directly. Tangential forces are estimated by sensors strategically placed on the sides of the thimble. Two applications are provided in order to facilitate an evaluation of the sensorized thimble's performance. These applications focus on: (i) force signal edge detection, which determines task segmentation of virtual object manipulation, and (ii) the development of complex object manipulation models, wherein the mechanical features of a real object are obtained and then reproduced for training by means of virtual object manipulation.

Introduction

The capacity to measure the forces exerted by a person during manipulation is highly valuable for the performance of real manipulations [1] or virtual object manipulation [2]. The aim of the project described in this article is to show the capabilities of contact force sensors in estimating the forces exerted by a person in both real and virtual manipulation tasks. Force/torque transducers or load cells are usually used to obtain these types of measurements [3]. These devices are very precise; however, they have drawbacks, mainly related to their weight, volume, and high cost. The project developed in this paper focuses on using contact force sensors to estimate manipulation forces. Specifically, the contact force sensors referred to in this paper are FlexiForce model A201 sensors by Tekscan Inc. This type of sensor acts as a force-sensing resistor: when the sensor is unloaded, its resistance is very high, and when force is applied to the sensor, this resistance decreases. Several types of material show this kind of behavior, from metals to semiconductors [4]. The principal drawbacks of these sensors are low repeatability and hysteresis. Nevertheless, contact force sensors and piezoresistive sensors provide an effective solution for various kinds of applications. Examples include the detection of collisions in human-robot interaction [5], the measurement of manipulation forces [6][7][8], biomedical applications [9,10], and minimally invasive surgery [11], owing to the sensors' highly reduced size, weight, and cost. This paper proposes the design of an end-effector for haptic devices that incorporates contact force sensors in order to estimate manipulation forces when interacting with virtual or real environments. Haptic interfaces are force feedback devices that enable bidirectional human-system interaction and provide the operator with force information from this interaction while simultaneously capturing the operator's motion or force input [12]. These devices can be used to interact with virtual (virtual telepresence) or real (telepresence) environments.
The principal applications of these systems concern teleoperation in remote environments, such as tele-surgery or tele-manipulation, and virtual applications for advanced manual training techniques, such as medical procedures or rehabilitation exercises. It is widely accepted that haptic devices benefit from force measurement capabilities in terms of the reduction of device dynamics or an increase in the fidelity of the forces exerted on the user [13]. However, the use of load cells in haptic devices is limited as a result of inertia and cost considerations. Katsura et al. [14] note that the limited bandwidth and high cost of force sensors hinder their widespread acceptance in such applications. Previous studies use expensive, high-precision, and heavy force/torque sensors attached to haptic devices to provide more reliable forces in complex tactile simulation applications [2] or for measuring and recording the material properties of soft objects [15]. Moreover, in the case of the aforementioned complex tactile simulation applications, the authors [2] recognize the importance of force sensing in haptic devices and point out that the use of force sensors significantly increases the price of their applications. Nonetheless, previous projects using haptic interfaces have not considered the use of cost-effective thimble-like devices with contact force sensors for estimating normal and tangential forces. Our work aims to develop a lightweight end-effector that provides force measurement capabilities to commercially available haptic devices.

This article is organized as follows: Section 2 describes the principal characteristics and requirements that a sensorized thimble must comply with in virtual manipulation applications that measure the force exerted by a user. These requirements are comparable to those for the manipulation of real objects. Section 3 describes the mechanical design of the thimble and the location of the contact force sensors. Section 4 focuses on the calibration of the contact force sensors by means of a load cell and on the calibration of the thimble with the four contact force sensors. The proper calibration of the sensors does not by itself guarantee the correct performance of the thimble as a whole, since finger deformations occurring as the grip force increases need to be taken into account. Section 5 provides various examples of applications in which the sensorized thimble has been used with positive results. Finally, in Section 6 some conclusions about this project are provided.

Principal Requirements for a Thimble-Like End-Effector for Haptic Devices

In some cases, the requirements that a thimble-like device must satisfy for manipulation applications conflict with one another. Therefore, it is necessary to establish which objectives are most important, accepting that other objectives may be compromised. In haptic applications, the thimble represents the end-effector of the device. This means that the size and weight of the thimble must be as small as possible. When manipulating real objects, similar conditions exist given that, as the thimble increases in size, the distortion of the applied forces becomes greater. Ideally, the thimble should not interfere with the user more than a medical glove does. In sum, the principal requirements for a thimble-like device are as follows: 1. It must be adjustable to different-sized fingers, 2. The thimble must be as light as possible, 3. The user must feel comfortable using the device, and 4.
The force securing the unit to the user's finger must not affect the user's perception.

The first requirement concerns whether to choose a thimble that can be adjusted to the user or to have a set of thimbles of different sizes, allowing each user to wear the most adequate thimble. The availability of different-sized thimbles initially appears advantageous. However, this is only advisable if the thimble is mainly used by the same person. For applications with several users who require different thimble sizes, the constant changing of thimbles is inconvenient and results in the deterioration of mechanical and electrical connections. The resulting loose connections and malfunctioning electrical contacts lead to poor performance of the device. Thus, it is preferable to use a thimble that can be adjusted to the user's finger. More components must be added in order to achieve such adjustability, which is a drawback with respect to the second requirement above. Nonetheless, the adaptability of the thimble to different users has priority.

The second requirement refers to the design of a thimble that is as light as possible. As mentioned earlier, the thimble is located at the end-point of the device; therefore even a small additional weight could significantly increase the inertia (proportional to the square of the distance). In haptic interfaces, high inertia can lead to a dynamic distortion of the user's perception [16,17]. This is the main reason why contact force sensors were considered in this design as opposed to traditional load cells. A load cell consists of a gauge inside a metal case, which results in a heavy and bulky design.

The third and fourth requirements relate to ergonomic considerations, with the aim of minimizing the deterioration of the user's perception. The force with which the thimble is secured to the user's finger is a critical factor in the performance of the thimble. This force must be minimal in order to avoid causing discomfort or distorting the user's perception. However, the attachment must be sufficiently firm to ensure that the thimble is not dislodged from the finger during manipulation. Therefore, depending on the task performed and the force capabilities of the haptic device, the tightening mechanism between the thimble and the finger may be adjusted.

Sensorized Thimble Configuration

The thimble is designed to fit different-sized fingers by means of a screw system that adapts to the sides of the finger. Velcro is also used to hold the finger to the thimble. The thimble has a cone-shaped design, narrow at the top and a little thicker at the bottom, at the point where the distal phalanx joins the beginning of the middle phalanx. This geometry is similar to that of the human finger, thereby allowing natural human-object interaction during manipulation. The thimble design is shown in Figure 1. The thimble was made out of an epoxy resin in order to reduce its weight. A technique known as stereolithography rapid prototyping (or stereolithography) was used in the manufacturing process [18]. The resulting thimble weighs 76 grams, which makes it suitable for virtual and real object manipulation; lower weight implies lower inertia and, therefore, less interference with the user's weight perception. The thimble includes four Flexiforce contact force sensors manufactured by Tekscan Inc. [19]. These sensors are used for estimating normal and tangential forces at the fingertip.
Contact force sensors only provide the normal component of the force applied to their active area. The sensors are therefore located in the thimble so as to enable the measurement of both normal and tangential manipulation forces. The normal manipulation force is provided by the sensor located at the fingertip, labeled "Sensor 1" in Figure 2. Three additional sensors are placed on the sides of the phalanx in order to estimate the tangential manipulation force. These sensors are labeled "Sensor 2", "Sensor 3", and "Sensor 4" in Figure 2. This thimble is designed to fit a particular haptic device. Figure 2(b) shows how the thimble is attached by means of a screw system to a two-finger haptic device called the MasterFinger-2 [20].

Sensor Interaction Due to Finger Deformation

In addition to the manipulation forces, the force used to secure the finger to the thimble affects sensors 3 and 4. That is, in addition to this clamping force, a small deformation of the finger appears when an object is grasped. This deformation significantly increases the pressure applied to the side sensors. Due to the symmetrical geometry of the thimble, finger deformations create the same force on either side sensor. These effects must be taken into account when estimating the real tangential force. The finger deformation effect appears whenever force is applied by the fingertip. For instance, when pressing a surface in the normal direction, the sensor located at the bottom of the thimble (sensor 1) should be the only sensor to detect such forces. Nonetheless, forces are also detected by sensors 2 and 3 due to finger deformations. The effect is illustrated in Figure 3(a): a cylinder weighing 4.5 N was used to show the forces caused by finger deformations within the thimble. The user applies a normal force to a rigid surface. Consequently, sensors 2 and 3 measure similar forces resulting from the symmetric deformation of the finger. In Figure 3(b), the cylinder is grasped horizontally using two thimbles. The sensor located under the fingertip measures the normal force applied to hold the cylinder, whereas the tangential force is estimated by the sensors located at the upper sides of the finger and should equal half of the total weight of the cylinder, since two thimbles are used. The data provided by the sensors is shown in Figure 3(b). The highest force value is the one measured by sensor 1 (corresponding to the normal force exerted by the user). The subtraction of the two measurements from the lateral sensors represents the real tangential force. The total weight obtained is 4.3 − 2.0 = 2.3 N, which is a close approximation to half of the cylinder's weight. Thus, the measurement provided by the contact force sensors in this configuration is a close approximation to the expected value.

Sensor Assembly and Its Calibration

An A201-25 Flexiforce sensor from Tekscan [19] with a range of 0 to 110 N was selected. The thickness of the sensor is approximately 0.2 mm and it is very lightweight. According to the manufacturer's specifications, the repeatability is ±2.5%. The active area of this sensor consists of an ultra-thin and flexible printed circuit located in a circle with a 9.53 mm diameter. Its behavior is similar to that of a variable resistor. When the sensor is unloaded, its resistance is very high (greater than 5 MΩ). This resistance decreases when force is applied to the active sensor area. The electronic circuit recommended by the manufacturer is shown in Figure 4.
The applied force must be exerted homogeneously over the active sensing area in order to guarantee proper sensor repeatability. For this reason, a "sandwich configuration" mechanical assembly was used to properly transmit forces to the sensor's active area. This mounting consists of placing the sensor between two cylindrical metal sheets located over the active sensor area, as shown in Figure 5(a). The cylindrical metal sheets must have a very flat surface in order to obtain proper repeatability. This "sandwich configuration" guarantees mechanical isolation between the finger and the thimble, since the user's force is fully transmitted to the sensor. This configuration has been successfully tested and used in many experiments. The assembly described above (disk + sensor + disk) was calibrated by means of a high-accuracy six-axis force/torque sensor manufactured by ATI Industrial Automation, model Nano17 [21]. This sensor provides six force/torque components and consists of a monolithic transducer with silicon strain gauges. A set of different weights was used to perform the calibration of the "sandwich configuration" assembly. The signals provided by both sensors (the ATI Nano17 and the Flexiforce A201-25 in the "sandwich configuration") were processed for the different weight sets. The calibration was carried out using least-squares polynomial fitting. A monotonically increasing polynomial was obtained, which guarantees a single force value for each voltage provided by the contact force sensor. This polynomial calibration improves on the performance specified by the manufacturer. The polynomial obtained is as follows: f(v) = 0.0557v^7 − 0.9616v^6 + 6.5957v^5 − 22.9525v^4 + 42.9525v^3 − 41.9706v^2 + 22.0111v + 0.0648 (1) where v is the voltage provided by the contact force sensor and f is the estimated value of the exerted force. As shown in Figure 6(c), the data provided by the contact force sensor and by the Nano17 sensor are very similar over a range of 0 to 20 N. This force range is usually sufficient for common manipulation tasks. Thimble Calibration The thimble requires an additional calibration beyond the calibration undertaken for each sensor. This calibration aims to determine the precision with which the thimble measures normal and tangential forces. Thus, it is important to compare the data obtained from the thimble sensors with the data from the ATI Nano17 F/T sensor; this comparison allows us to determine the quality of the estimated normal and tangential forces. A new thimble containing both the ATI Nano17 and the four contact force sensors was designed, as shown in Figure 7. This thimble was specifically designed to compare the information provided by the thimble sensors with that of the ATI Nano17 [22]. Normal forces are provided by the sensor located at the fingertip of the thimble (sensor 1 in Figure 2) and correspond to the information provided by the ATI Nano17 in the Z-axis direction. The tangential forces are obtained from the data provided by the three remaining sensors in the thimble (sensors 2, 3, and 4 in Figure 2) and correspond to the data provided by the ATI Nano17 in the X-Y plane. The difference between the data provided by both sensors (thimble and ATI Nano17) is approximately ±1.43 N over a range of 0 to 20 N. Depending on the application, this deviation may or may not be deemed acceptable.
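To illustrate how Equation (1) would be applied in software, the sketch below evaluates the calibration polynomial for a few voltages. The sample voltages are illustrative only, and the function assumes the input is already the conditioned sensor voltage within the working range described above.

```python
# Minimal sketch of evaluating the calibration polynomial of Equation (1),
# mapping a Flexiforce output voltage v to an estimated force f(v) in newtons.
import numpy as np

# Coefficients of Equation (1), highest degree first.
CALIBRATION_COEFFS = [0.0557, -0.9616, 6.5957, -22.9525,
                      42.9525, -41.9706, 22.0111, 0.0648]

def voltage_to_force(v):
    """Convert a conditioned sensor voltage (V) into an estimated force (N)."""
    return np.polyval(CALIBRATION_COEFFS, v)

# Example: estimated forces for a few voltages (values are illustrative only).
for v in (0.5, 1.5, 3.0):
    print(f"{v:.1f} V -> {voltage_to_force(v):.2f} N")
```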
For instance, in the case of training tasks in rehabilitation applications, this deviation is acceptable because the forces applied by different practitioners vary significantly and do not require a higher level of precision [23]. For applications in which this resolution is insufficient, an adapted thimble incorporating a high-precision force/torque sensor can be used instead, although this significantly increases the price and the inertia of the device. The signal variation was also checked. This comparison concerns the precision with which force flanks are detected. In this case, the first derivatives of the data provided by both sensors are calculated and compared. The standard deviation of the difference between both derivatives is equal to ±0.166 N/s. Consequently, the estimation of manipulation force flanks is considerably more precise than the estimation of normal and tangential forces. As shown in Figure 8, when the forces vary, the flanks of the signals measured by the thimble and the corresponding signals provided by the ATI Nano17 are very similar. This information may be useful for applications such as task segmentation and event or contact detection. Applications The estimation of both normal and tangential forces during advanced manipulation has proven to be useful for several applications, including the segmentation of manipulation tasks and the fast modeling of virtual objects. • Task segmentation: Attention to the derivative of the force signal shows that a task can be segmented into different stages, which may be useful for a range of applications, including those that improve adaptive control architectures and those that bring attention to an omitted step. -Control parameters can be optimized for the different stages of the task at hand, or a unit of force can be transmitted whenever an event occurs, as opposed to transmitting the exact manipulation force, which can be physically demanding for the operator in some situations. -Bringing attention to steps omitted from a previously defined task can increase safety in telemaintenance operations or assist people in performing activities of daily living. • Modeling physical interactions: Information obtained about the forces experienced by a user while manipulating a complex object can contribute to the design of a virtual model of this physical interaction. Physically modeling the forces of a complex system in order to generate a virtual model with which the user can interact by means of a haptic device can result in complex equations that the system must solve in real time, hence requiring high-performance computers and GPUs. In contrast, the proposed design shows that manipulation forces can be recorded and translated into a simple simulation model, such as a look-up table or a simple interpolation, which reduces the hardware requirements for haptic applications. The following section shows various applications of the sensorized thimble for segmentation and for modeling virtual environments. Segmentation of Real Tasks: Manipulation of a Bottle Containing Liquid The information provided by the thimble sensors is more accurate for detecting flanks than for measuring absolute values. Therefore, experiments related to flank-detection performance were undertaken. This subsection summarizes the results obtained in an experiment focusing on the manipulation of a container holding liquid.
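A minimal sketch of derivative-based flank detection, of the kind described above, is shown next. The sampling period and the threshold are illustrative assumptions and are not taken from the original system.

```python
# Minimal sketch of detecting force flanks from the first derivative of the
# force signal, as used for task segmentation and event detection.
import numpy as np

def detect_flanks(force, dt=0.01, threshold=1.0):
    """Return sample indices where |dF/dt| exceeds `threshold` (N/s).

    force     -- 1-D array of force samples (N)
    dt        -- sampling period (s); 10 ms is an assumption for illustration
    threshold -- minimum derivative magnitude considered a flank (N/s)
    """
    dfdt = np.gradient(force, dt)            # numerical first derivative
    return np.flatnonzero(np.abs(dfdt) > threshold)

# Example with a synthetic grasp-hold-release force profile (illustrative only).
t = np.arange(0.0, 2.0, 0.01)
f = np.where(t < 0.5, 0.0, np.where(t < 1.5, 5.0, 1.0))
print(detect_flanks(f))
```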
Given the available force information, the task can be segmented into different stages: approximation, contact, grasping, lifting, tilting, and releasing. An example of task segmentation is shown in Figure 9. This figure shows the different stages and the recorded forces while manipulating a bottle containing liquid. First, the user approaches the bottle (A). When the user is in the correct position, he or she grabs the bottle (B), increasing the force until he or she is able to lift it (C). Then, the user holds the bottle vertically (D) and starts tilting it (E) until the bottle reaches a horizontal position. The user continues tilting the bottle and then tilts it back to the horizontal position (G). Finally, the user holds the bottle vertically again and releases it over the table (I). This information can also be used to create an approximate model of the system [22], which avoids the complex model equations that would otherwise have to be solved in real time and that require high-performance computers and Graphics Processing Units (GPUs). Modeling Physical Interactions: Body-Joint Model for Medical Rehabilitation Simulators The sensorized thimble described here can be used to create an approximate model of the characteristics of human joints, which are inherently multidimensional and non-linear [23]. The system developed characterizes the stiffness of the metacarpophalangeal joint of the index finger about its axes of rotation. Note that even though the finger mainly moves in the flexion/extension and abduction/adduction angles of rotation, the pronation/supination degree of freedom (DoF) should also be considered for finger rehabilitation. The finger was mobilized over its whole range of movement and the force was recorded for different finger angles; a least-squares polynomial fit was calculated to approximate the joint's behavior in every rotational DoF, as shown in Figure 9. Equations (3)-(5) show the relationship between the angles and torques (the force was applied at approximately 1.2 cm from the center of rotation): τ(α) = 2.2e-7·α^3 + 1.24e-5·α^2 + 5.56e-4·α + 7.1e-3 (3) τ(β) = 2.04e-6·β^3 + 1.44e-5·β^2 + 1.6e-3·β − 5.9e-3 (4) τ(γ) = 1.68e-5·γ^3 − 1.68e-4·γ^2 + 7.8e-3·γ + 1.1e-3 (5) where α represents the flexion/extension angle, β the adduction/abduction angle, and γ the pronation/supination angle, as shown in Figure 10. The least-squares errors of these polynomial fits are 5.8e-3 Nm, 4.6e-3 Nm, and 8.6e-4 Nm, respectively, which are smaller than the thimble's measurement error. These results are similar to those of a previous in vitro study of flexion/extension stiffness using freshly frozen cadaver fingers [24]. This model was implemented in a simulator and can potentially be used by students to learn and practice rehabilitation procedures. Conclusions Researchers agree that force sensing at the end-effector of haptic devices improves the performance of real and virtual object manipulation. However, currently available commercial impedance-type haptic devices do not include force-sensing capabilities due to the high price and weight of traditional high-precision sensors. This paper describes the design and calibration of a lightweight and cost-effective end-effector that can be adapted to anatomical variations among users and attached to commercially available haptic devices. Contact force sensors only measure the forces applied in the direction normal to their active surface.
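The joint stiffness model of Equations (3)-(5) can be evaluated directly as three cubic polynomials, as sketched below. The angle units are not stated in the text, so the assumption of degrees and the example posture are illustrative only.

```python
# Minimal sketch of the metacarpophalangeal stiffness model of Equations (3)-(5):
# restoring torque (N*m) as a cubic polynomial of the joint angle.
import numpy as np

# Coefficients, highest degree first, as reported in Equations (3)-(5).
TAU_FLEX_EXT = [2.2e-7, 1.24e-5, 5.56e-4, 7.1e-3]      # tau(alpha)
TAU_ABD_ADD  = [2.04e-6, 1.44e-5, 1.6e-3, -5.9e-3]     # tau(beta)
TAU_PRON_SUP = [1.68e-5, -1.68e-4, 7.8e-3, 1.1e-3]     # tau(gamma)

def joint_torques(alpha, beta, gamma):
    """Return the torques for the three rotational DoF (angle units assumed degrees)."""
    return (np.polyval(TAU_FLEX_EXT, alpha),
            np.polyval(TAU_ABD_ADD, beta),
            np.polyval(TAU_PRON_SUP, gamma))

# Example: torques at a mid-range finger posture (angles are placeholders).
print(joint_torques(alpha=30.0, beta=5.0, gamma=2.0))
```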
As previously described, both normal and tangential contact forces can be estimated by strategically positioning four contact force sensors inside a thimble. The proposed end-effector has a measurement error of ±1.43 N, which is sufficient for applications that capture and model virtual scenarios such as object grasping, and for rehabilitation applications in which the forces applied by different practitioners vary significantly while still resulting in correct practice. We have also shown that the proposed design is very precise in detecting force signal edges (±0.166 N/s). Force edge information can be used for task segmentation, which is useful when carrying out complex manipulations. This task segmentation allows us to determine which stage of an overall task a user is at. Such segmentation can be useful for reminding the user if he or she skips a step of a task, or for teleoperation in which force information is transmitted to the user only when events occur. Moreover, we have also described an application for real object manipulation that focuses on modeling the mechanical features of finger joints, in particular the stiffness of the metacarpophalangeal finger joint in its three rotational directions. It demonstrates that the information provided by the sensorized thimble can also be applied to the manipulation of real objects. The development of these kinds of models is useful for reproducing this type of manipulation in a more realistic manner.
5,629.4
2011-12-06T00:00:00.000
[ "Computer Science", "Engineering" ]
Relation between fluid intelligence and mathematics and reading comprehension achievements: The moderating role of student teacher relationships and school bonding Several studies have shown that the quality of students' interpersonal relationships is relevant to their academic achievement. Nevertheless, most available studies have explored the relation between cognitive functioning and academic achievement without taking into account the quality of the relationships experienced in the school environment. Furthermore, studies that have begun to consider the joint role of these factors in the prediction of academic achievement are scant. It therefore appears relevant to examine more deeply the relation between cognitive functioning and the quality of school relationships in order to support students' academic achievement and the potential of youth. In this paper, we examined the moderating role of the quality of student-teacher relationships and school bonding (STR-SB) in the associations of fluid intelligence (Gf) with academic achievement among adolescents (N = 219). A multiple-group structural equation modelling analysis revealed that, unexpectedly, STR-SB quality moderated only the link between Gf and mathematics. The findings support the idea that the quality of student-teacher relationships may be a relevant dimension to consider when clarifying the association between cognitive functioning and academic achievement. Introduction An extensive body of research indicates that academic achievement may be a key aspect of adolescents' lives, as it is related to their later cognitive development, sense of competence and self-efficacy, as well as social and emotional well-being [e.g., [1][2][3]]. Therefore, understanding how to catalyse mechanisms promoting academic achievement is critical not only for enhancing education programmes but also for fostering positive youth development. In predicting school achievement, two distinct research lines can be traced: one line has stressed the role of intelligence [4][5][6][7] and a second line the role of interpersonal relationships [e.g., 8,9]. In the former case, studies have focused attention on the role of fluid intelligence [3,10,11], which refers to the ability to adapt and deal with new situations in a flexible way, without previous learning being a decisive help. It is fundamentally shaped by primary skills, such as induction and deduction, relationships and classifications, the breadth of operational memory, and intellectual speed [3,[12][13][14][15][16]]. In the latter case, studies have suggested that to succeed in school students need to develop positive (or at least non-negative) interpersonal relationships, which may represent ecological resources with a positive impact on school adjustment, engagement, and achievement [e.g., 17]. There is an extensive literature exploring the relation between the quality of student-teacher relationships and academic achievement [18][19][20][21]. Among others, the quality of student-teacher relationships and school bonding (STR-SB) has been found to be especially related to academic success [22,23].
Even though these two research lines have worked substantially in an independent way, more recently there have been ongoing efforts to merge and comprehensively integrate the insights gained from each of them [24][25][26][27][28][29].Some contributions have enlarged the focus from cognitive to non-cognitive individual factors [24,27], as well as the complex interaction between them [30].Recent contributions have expanded the focus beyond the individual level and have targeted factors allocated on the broader environmental level, such as the classroom, the relationship with the teachers, and the family context [30][31][32].While confirming the strong association between cognitive measures and academic achievement, amply demonstrated by the scientific literature (i.e., between general domain measures and academic achievement), these recent contributions also suggest that non-cognitive measures (such as self-esteem, motivation, and quality of student-teacher relationship) could moderate such association [33][34][35][36].In the same vein, our study intended to evaluate the moderating effect of a non-cognitive dimension (i.e., quality of student-teacher relationship and school bonding) on the association between fluid intelligence and academic achievement. Fluid intelligence and academic achievements In a recent meta-analytic work, Peng et al. [3] investigated the relation between fluid intelligence (generally, labelled as Gf) and academic achievements, operationalised as mathematics and reading skills.They based this last choice on the most influential intelligence theories, i.e., Cattell and Horn's fluid and crystallised intelligence model [37,38] and Carroll's [39] threestratum model, as well as on the emphasis that mathematics and reading usually receive at school across various cultures.The findings revealed moderate but consistent reciprocal relation between Gf and mathematics and reading, with a stronger association with the former than with the latter.However, the authors also studied these relations as a function of various moderators, potentially explaining the variations evidenced in the literature.The relation between Gf and both mathematics and reading were moderated by distinct mathematics and reading skills, Gf tasks, and age.Specifically, Gf showed a stronger association with more complex mathematics (e.g., word problems) and reading (e.g., comprehension) skills than with foundational skills (e.g., calculation and code skills), Gf tasks of composite non-verbal reasoning had a stronger association with mathematics/reading compared to those of matrix and non-matrix reasoning, and all these Gf tasks showed stronger associations than visuospatial reasoning.Finally, the associations of Gf with mathematics/reading improved with age. 
These findings were discussed in terms of different theoretical perspectives: (a) the intrinsic cognitive load theory [40], according to which the relation between Gf and academic performance is stronger when more complex tasks are considered; and (b) the mutualism theory [41], assuming weaker relation between Gf and mathematics/reading in early development but stronger associations in later development due to reciprocal influences.Although the results of Peng et al.'s work [3] provided an outstanding contribution to the current literature, the topic was especially focused on the links between cognition and learning.Other studies and other theoretical approaches suggest that for a more comprehensive understanding of the relation between Gf and academic achievements, it may be useful to also consider different and more social/ecological moderators. The potential moderating role of STR-SB quality in adolescence The developmental systems perspective emphasises that human development inherently involves reciprocal influences between individuals and their changing contexts [e.g., 42].With regard to youth, it proposes that when their strengths can be aligned with the resources of nurturing contexts, then these strengths may be optimised in positive developmental outcomes.This might be the case for students at school.Students have a number of potential strengths, such as their cognitive resources and specifically Gf.However, these resources may be actualised in academic achievement when the school context provides a positive environment in which the students' strengths can be positively directed. In line with this perspective, Vygotsky's theory [43] claims that social influences are crucial to promote the potential of youth.Particularly, they develop their ability to independently perform and optimally use cognitive functions (as Gf) during their activity with significant adults in meaningful learning contexts, such as teachers at school.The consequences of these joint activities are usually related to academic achievement and higher school success [19,[44][45][46][47]. Although both these theories conceptualise an interaction between cognitive and relational domains, most studies have investigated them in a separate way.While past studies have mostly investigated schooling achievement at the level of the individual [e.g., [48][49][50], recent findings point to the importance of considering the role of factors allocated on a broader environmental level and related to students, parents, and teachers [19,51,52].Recent studies have shown that the joint roles of individual and environmental factors are crucial to explain academic achievement [31,32].The relevance of considering the joint roles of these factors in schooling achievement (e.g., mathematics achievement) has also been highlighted in a recent review [30].In their work, Chang and Beilock [30] showed how several studies provide converging evidence for individual (cognitive, affective/physiological, motivational) and environmental (social/contextual) factors that may explain the existence of a mathematics achievement gap between students.Individual factors (cognitive, emotional, social) interact with contextual ones in predicting schooling achievement. 
Among these environmental factors, we decided to focus our attention on the quality of STR-SB in light of the relevance of this dimension in the schooling career of adolescents, as highlighted by recent reviews of the literature [17].This line of study, rooted in attachment theory [53], proposes to conceptualise the relationship between teachers and students as an attachment relationship [54].Similarly, as happens in relationships with caregivers, such attachment relationship will function as a secure base for the student to explore new learning opportunities as well as a safe haven in which to regulate negative emotions in the school context.Since this conceptualisation was shared in the academic community, it has been widely tested among preschool and primary school children that the affective quality of the studentteacher relationship is an important predictor of children's academic achievement and schooling career, mediated by increased student engagement in the classroom setting [17,45,51,[55][56][57].In contrast, difficult relationships with teachers and negative bonds with school have a negative impact on students' motivation, their ability to adequately orient attentional resources, their positive participation in social learning activities, and, ultimately, their academic achievements [56,58].Particularly for children who are at risk of failure at school, an emotionally supportive relationship with a teacher can act as a protective factor and have positive effects on children's developmental outcomes [45]. Instead, mixed and scant evidence exists for the role of STR-SB in adolescence.Some scholars argue that its importance declines over the course of an adolescent's academic career, in part, due to changes in the social context: larger schools, less teacher-student interaction, and shifts in social support from teachers, peers, and parents [59].Conversely, others have shown that STR-SB quality remains important for secondary school students' achievement, and positive STR-SB is even more strongly associated with secondary school students' achievement than with primary school students' achievement [17].A recent study examined STR-SB quality and students' achievement in middle school and showed that positive STR-SB impacted mathematics achievement by reducing math anxiety [27].To sum up, considering this state of the art, we suggest that this age deserves more research attention. In the face of this scant empirical evidence, we might suggest that the influence of STR-SB changes from childhood to adolescence: later on, in fact, adolescents rely on teachers and school less than at earlier ages when they are primary relational resource.As such, instead of suggesting direct links between STR-SB and school achievement in adolescence, we might suggest that STR-SB quality becomes one important environmental variable, among others, moderating the essential links between the cognitive domain and school achievement [60]: high levels of warmth, closeness with teachers as well as a perceived positive bonds with school might support students to deploy effectively their cognitive resources to explore the learning environment, leading them to positive attitudes toward school and successful academic performance.Therefore, to contribute to the state of art of the literature, the current study focused on testing this latter possible mechanism among younger adolescents, based on an integration of intelligence theories, developmental systems perspective, and Vygotsky's theory, as already illustrated. 
The present study Starting from the state of the art of the above-mentioned literature, the present study aimed to test the moderating role of STR-SB quality on the relation between Gf and mathematics and reading comprehension achievements among young adolescents; that is, it focused on understanding how these relations strengthened or weakened depending on the STR-SB quality.Besides being a scantly investigated age in relation to the role played by STR-SB in school achievement, we chose this age group also because general cognitive abilities seem to be relatively more stable from this age onwards [24], despite more cognitive abilities being required in connection with new complex academic tasks [32].In any case, it is relevant to highlight that in this period of life, physical development may not yet be complete [61].In light of this and considering the possible repercussions on learning [62], measures of intelligence and of general cognitive functioning should be chosen in order to soften the impact of maturational variables.This choice would reduce the possible confounding effects on academic attainment due to fluctuations in the development of general cognitive abilities when testing for the moderating role of a third variable. To achieve our goal, we adopted a person-centred approach [63].Practically, this permitted to determine groups of students based on similarities in their STR-SB quality and to focus on how such groups may change for different students.The advantage in using such an approach is therefore to recognise the "tendency for a given person to have a distinct pattern of factors on which they are high, medium, or low" [64, p. 39], dealing simultaneously with non-linearity and interactions among variables that cannot be well described using variable-centred models [65].Furthermore, this approach allows us to maximize the homogeneity within the groups and the heterogeneity between the groups, allowing us to grasp the effects due to the achievement of certain thresholds of certain characteristics.Starting from this, we investigated how the associations of Gf with mathematics and reading comprehension achievements changed according to these different groups.We were guided by the hypothesis that groups exhibiting higher levels of STR-SB quality showed stronger links of Gf with mathematics achievement and reading comprehension than groups exhibiting lower STR-SB quality.In fact, as previously mentioned, we deemed that the potentiality of cognitive resources, such as Gf, has a greater influence on academic achievement when student-teacher relationships and school context provide a (perceived) positive environment in which the students can maximise their strengths.In testing this hypothesis, we controlled for gender.Indeed, prior work has suggested gender differences in academic achievement, with male students showing better mathematics abilities and female students showing better verbal abilities than their respective counterparts [66,67]. 
Participants and procedure Participants included a convenience sample of 219 sixth-grade students (54% male) from nine classrooms of one state middle school located in an urban area in southern Italy. This school serves a predominantly autochthonous community with a middle-to-high socioeconomic background, as emerged from the aggregate data provided by the school on the index of family economic, social and cultural status (ESCS; see [67]): overall, the classes involved in this study presented ESCS = 0.98 (0 represents the average of the national reference population) and SD = 0.31 (an expression of limited variability). In Italy, the sixth grade marks the start of middle school, which means that students have different teachers for different subjects; however, the teachers of the Italian language (including reading comprehension) and of mathematics are largely the teachers with whom the most meaningful relationships are established. The sample excluded students with (a) clinical diagnoses of cognitive and learning difficulties and (b) severe behavioural problems, as certified by mental health services. The mean age was 11.12 years (SD = .31). Ninety-seven percent of participants were Italian Caucasian, and all were Italian speakers. All participants were informed of the objectives of the study and provided informed written consent before completing the investigation. The study had the prior approval of the Local Ethics Committee (code: CEL01/18) in accordance with the principles of the Declaration of Helsinki. The school was selected by using an internal departmental search database including a list of local school institutions. After selecting the schools attended by young adolescents, we sent a motivational letter presenting the study and inviting them to participate. Finally, we selected the school that showed the greatest motivation to participate and, above all, to continue the research, given that the study had a longitudinal design (in the current study we report data from the first wave). All sixth-grade classes and their students were included except for a minority (< 5%) of students who met the exclusion criteria (see above). Participants' parents were informed through the school about the purpose of this study and provided written informed consent for their children's participation. For each school class, the data were collected in two collective sessions held very close together during class time within the first school quarter. Participants had 1 hour in each session to complete the different tasks and/or questionnaires and could withdraw at any time.
Measures Fluid intelligence.Cattell's Culture Fair Intelligence Test (CFIT; [68]) was used to assess fluid intelligence.The CFIT is a well-known matrix reasoning instrument assumed to be independent of cultural experiences.It includes two equivalent forms (A and B).As suggested by the test manual, form A is preferable in a school setting [see, 68], and it is characterised by four subtests involving multiple-choice problems progressing in difficulty: series (12 items), classification (14 items), matrices (12 items), and conditions (8 items).A raw score for each subtest is calculated by summing the correct responses and the total raw score ranging from 0 to 46.The validity of the CFIT has long been established [24,69], and the subtest raw scores are usually taken to be strong indicators of one comprehensive latent construct of fluid intelligence; this is why in the present study we did not consider the norms reported in the manual, which have a greater utility in clinical assessment.The Spearman-Brown split-half reliability coefficient in this study was .79 for the entire measure. Mathematics achievement.The AC-MT 11-14 standardised mathematical battery [70] was used to assess mathematics achievement.This test was developed for the assessment of calculation, arithmetic reasoning, and number comprehension skills of Italian students attending middle schools.The AC-MT 11-14 tasks can be grouped into three areas: written calculation (8 items), referring to procedural aspects of mathematics (i.e.written operations, such as addition, subtraction, etc.), number knowledge (20 items), referring to the aspects of estimating the quantity and positional values of numbers (identify the largest number, transform into written digits, etc.), and mathematic reasoning (32 items), referring to mental arithmetic task (approximate calculation, mathematical facts, rounding tests, etc.).These three constructs were assessed with increasingly complex test questions.A raw score for each task is calculated by summing the correct responses, and the total raw score range is 0-60.The battery has demonstrated good internal consistency and validity [71,72].The Spearman-Brown split-half reliability coefficient in this study was .76 for the entire measure. Reading comprehension.The standardised MT battery [73] was used to assess reading comprehension of Italian students attending middle schools.This test was developed to assess the ability to gather correct information from the reading of a text independently of the contribution of decoding and memory processes.Specifically, it explores the ability to make inferences, lexical competence, vocabulary knowledge, strategic processing, and metacognition.Participants are asked to read two passages of approximately 25 lines, one of a narrative type and one of an informative type, and then to answer 15 questions for each passage about what they had read.A raw score for each task is calculated by summing the correct responses, and the total raw score ranges from 0 to 30.The battery has demonstrated good internal consistency and validity [74,75].The Spearman-Brown split-half reliability coefficient in this study was .70 for the entire measure. 
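Each of the measures above reports a Spearman-Brown split-half reliability. The sketch below illustrates how such a coefficient can be computed; the odd/even split is one common convention and is an assumption here, as the original split is not specified in the text.

```python
# Minimal sketch of Spearman-Brown corrected split-half reliability.
import numpy as np

def split_half_reliability(scores):
    """scores -- 2-D array, rows = participants, columns = item scores."""
    odd = scores[:, 0::2].sum(axis=1)     # total score on odd-numbered items
    even = scores[:, 1::2].sum(axis=1)    # total score on even-numbered items
    r_half = np.corrcoef(odd, even)[0, 1]
    return 2 * r_half / (1 + r_half)      # Spearman-Brown prophecy formula

# Illustrative call with random 0/1 responses (219 participants, 46 items);
# random data will of course give a reliability near zero.
rng = np.random.default_rng(0)
items = rng.integers(0, 2, size=(219, 46))
print(round(split_half_reliability(items), 2))
```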
The quality of student-teacher relationships and school bonding. The Italian adaptation of the Student-Teacher Relationship and School Bonding Questionnaire (STR-SB_Q; [23,76]) was used to assess students' self-reported perceptions of their relationships with the Italian language and mathematics subject teachers and of their bonds with school. Specifically, the STR-SB_Q examines the perceived positive and negative experiences (both affective and cognitive) of warmth, trust, accessibility, and responsiveness in student-teacher relationships, along with general perceptions of the overall school environment. The questionnaire is composed of 19 self-assessment items, which can be grouped into 3 subscales: affiliation with teacher (eight items; e.g., "I trust my teacher"), dissatisfaction with teacher (three items; e.g., "I feel angry at my teacher"), and bonds with school (eight items; e.g., "I feel safe at school"). These constructs are also good proxy variables for the relational dimensions postulated by extended attachment theory. Affiliation with teacher is close to the "closeness" construct, while dissatisfaction with teacher is an indirect indicator of a climate of potential "conflict". Bonds with school extends the concept of closeness to the whole school environment [54]. Items were rated on a 4-point Likert-type scale, from never or almost never true (1) to almost always or always true (4). In the current study, Cronbach's alphas were .88, .66, and .80 for affiliation with teacher, dissatisfaction with teacher, and bonds with school, respectively. Although the value for dissatisfaction with teacher may appear low (< .70), it evidenced a sufficient level of internal consistency considering that this subscale comprises only three items (Cronbach's alpha is highly sensitive to the number of items). Furthermore, the average item-total correlation was .48, which is higher than the acceptable level of .30 suggested by Nunnally and Bernstein [77], indicating that the different groups of items were measuring the construct in the same direction. Data analysis Descriptive statistics for the observed variables were initially calculated. Afterwards, as also suggested by Murray and Greenberg [23], we conducted a cluster analysis based on the STR-SB_Q subscale scores to identify groups of students according to their perceived STR-SB profiles. We identified the appropriate number of clusters by hierarchical cluster analysis, using Ward's method based on the squared Euclidean distance [78] and examining solutions from two to four clusters. The a priori criteria used for choosing the final number included the theoretical meaningfulness of each cluster, parsimony, and explanatory power (see [79]; the cluster solution had to explain at least 26% of the variance in each of the STR-SB_Q dimensions; see [80]). Subsequently, we grouped the participants using K-means cluster analysis procedures, and the standardised mean values of the STR-SB_Q grouping variables were plotted. The validity of the final solution was checked via a multivariate analysis of variance (MANOVA) on the three STR-SB_Q dimensions by cluster. We also tested the replicability of the solution by splitting the sample into two random halves and recomputing the cluster analyses for each subsample. The level of agreement was calculated using Cohen's [81] kappa.
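A minimal sketch of the two-step clustering strategy described above (a hierarchical Ward solution to choose the number of clusters, followed by K-means assignment) might look as follows. The variable names and the random data are placeholders; `subscales` is assumed to hold the three standardised STR-SB_Q subscale scores.

```python
# Minimal sketch of Ward hierarchical clustering followed by K-means,
# applied to three standardised STR-SB_Q subscale scores per student.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
subscales = rng.normal(size=(219, 3))   # placeholder z-scored subscale data

# Step 1: hierarchical clustering with Ward's method to inspect 2-4 cluster
# solutions against the a priori criteria.
Z = linkage(subscales, method="ward")
hier_labels = fcluster(Z, t=2, criterion="maxclust")

# Step 2: K-means with the retained number of clusters (here, two).
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(subscales)
print(np.bincount(kmeans.labels_))      # cluster sizes
```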
After reporting bivariate correlations among the observed variables of interest for each STR-SB profile group, we took a multigroup structural equation modelling (SEM) approach to test our hypotheses using Mplus 7 [82]. First, we performed a confirmatory factor analysis (CFA) on the entire sample to test a measurement model including the latent variables of Gf (series, classification, matrices, and conditions as indicators), mathematics achievement (MA; written calculation, number knowledge, and mathematic reasoning as indicators), and reading comprehension (RC; the scores for each of the two passages as indicators). We permitted each indicator's factor loading on the hypothesized factor to be freely estimated while fixing the cross-loadings at zero. Factor covariances were allowed. To examine measurement invariance across the STR-SB profile groups, we conducted multigroup CFAs, sequentially introducing appropriate constraints to test different levels of invariance: equal factor structure constraints for configural invariance and equal factor loading constraints for metric invariance [83]. Second, we estimated a multigroup SEM (M1) in which the pathways from Gf to MA and RC were freely estimated across the STR-SB profile groups. Gender (dummy coded: 0 = female; 1 = male) was controlled for by allowing it to predict both MA and RC. M1 was compared with two other more restrictive models, one (M2) constraining the pathways from Gf to MA to be equal across profiles and the other (M3) constraining the pathways from Gf to RC. We evaluated model fit according to the most popular and widely employed fit indices and their associated cut-offs [84]: the chi-square (χ²) with p-value > .05, CFI ≥ .95, and RMSEA ≤ .06. Significant differences among nested models (the more restrictive vs the less restrictive) were evaluated using the following criteria: a χ² difference (Δχ²) significant at p < .05 and ΔCFI < -.005 [85]. Preliminary analyses Only a few missing values were found for the study variables (3%). We performed missingness analyses to explore whether participants with missing data differed systematically from participants with complete data. The results supported the missing-completely-at-random assumption and, therefore, missing values were imputed at the item level using a regression estimation function. Table 1 summarizes the descriptive statistics and shows that some observed variables were not normally distributed, with skewness and kurtosis values > |1.00| [84]. As multivariate non-normality was also evidenced (normalized Mardia's coefficient was 7.82, p < .001), the data were subsequently analysed using robust maximum-likelihood estimation in the context of the SEM models. Bivariate correlations among the STR-SB variables were: (a) r = -.60 between affiliation with teacher and dissatisfaction with teacher, (b) r = .64 between affiliation with teacher and bonds with school, and (c) r = -.43 between dissatisfaction with teacher and bonds with school. The principal descriptive statistics and the correlation matrix are reported in Table 1.
Cluster analysis Based on the a priori criteria, a two-cluster solution was retained as the most appropriate. Solutions with a greater number of clusters violated the principles of parsimony, explanatory power, and/or theoretical meaningfulness [79], including clusters that presented only slight differences compared to the two main clusters and that were scarcely interpretable. The first cluster (n = 100; 46% of the sample) consisted of students scoring higher on dissatisfaction with teacher and lower on affiliation with teacher and bonds with school. The second cluster (n = 119; 54% of the sample) was composed of adolescents who scored higher on affiliation with teacher and bonds with school and lower on dissatisfaction with teacher. Thus, we found, in sequence, groups with low and high perceived STR-SB quality (see Fig 1 for standardised means and Table 1 for descriptive statistics). Furthermore, the MANOVA on the grouping variables revealed a significant multivariate effect, Wilks' Lambda = .40, F(3, 215) = 106.90, p < .001, η² = .60, indicating that about 60% of the variability was accounted for by group differences between the two clusters. Subsequent univariate analyses of variance also revealed that the two-cluster solution explained a good percentage of variance for each grouping variable (about 40% on average). Findings from the replicability analysis revealed κ = .83, indicating a good level of reliability and stability. Multiple-group SEM analysis Correlations between the main and control (gender) indicator variables, as well as between the main latent variables, by STR-SB profile group are displayed in Table 2. The measurement model fit the data well, χ²(24) = 28.53, p = .26, CFI = .980, RMSEA = .029, and full metric measurement invariance across the STR-SB profile groups was evidenced (see Table 3). Starting from the metric-invariant model, we ran model M1 (pathways from Gf to MA and RC freely estimated), which showed good fit, χ²(74) = 81.42, p = .26, CFI = .971, RMSEA = .030. When comparing it with the more restrictive M2 (pathways from Gf to MA constrained to be equal), we obtained a significantly worse fit for M2, Δχ²(1) = 6.00, p = .01, ΔCFI = -.020. There was no significant change in model fit when comparing M1 and M3 (pathways from Gf to RC constrained to be equal), Δχ²(1) = 0.12, p = .73, ΔCFI = .004. This suggests that the association between Gf and MA, but not between Gf and RC, was moderated by the STR-SB quality profile, as shown in Fig 2, which represents the final estimated model, M3. In particular, the positive relation between Gf and MA was significantly stronger for the high STR-SB quality group than for the low group. Instead, the positive association between Gf and RC was not significantly different between the two profile groups.
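To make the nested-model comparison criteria concrete, the sketch below computes a chi-square difference test and the change in CFI. The numbers passed in the example are placeholders, not the fitted values of M1-M3, and with robust (e.g., MLR) estimation the published difference test additionally involves a scaling correction that is not shown here.

```python
# Minimal sketch of comparing nested SEM models via delta chi-square and delta CFI.
from scipy.stats import chi2

def compare_nested(chisq_restricted, df_restricted, chisq_free, df_free,
                   cfi_restricted, cfi_free):
    """Return (delta chi-square, delta df, p-value, delta CFI)."""
    d_chisq = chisq_restricted - chisq_free
    d_df = df_restricted - df_free
    p_value = chi2.sf(d_chisq, d_df)     # upper-tail probability of the difference
    d_cfi = cfi_restricted - cfi_free
    return d_chisq, d_df, p_value, d_cfi

# Placeholder comparison of a restricted model against a freer baseline.
print(compare_nested(chisq_restricted=90.0, df_restricted=75,
                     chisq_free=84.0, df_free=74,
                     cfi_restricted=0.955, cfi_free=0.975))
```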
Discussion The present study investigated the role played by students' relationships with teachers and schools in promoting academic achievement. Specifically, our aim was to test whether STR-SB moderated the relation of Gf with mathematics achievement and reading comprehension among younger adolescent students. We hypothesised that students with higher STR-SB quality would show stronger links between Gf and mathematics/reading than students with lower quality. Generally, we found significant or close-to-significant associations of Gf with mathematics achievement and reading comprehension after controlling for gender. This result is consistent with previous studies documenting significant positive relations between Gf and academic attainment [3]. In our sample, on average, this association was moderate in magnitude, suggesting that Gf is operationally distinct from mathematics/reading achievement, in accordance with the Cattell-Horn-Carroll theory [37][38][39], and a useful conceptual tool for understanding academic achievement processes [86]. Thus, during the first year of middle school, when novel situations and academic tasks are asked of younger adolescent students, the deliberate use of Gf mental operations (e.g., making inferences or classifications, producing and testing hypotheses, and solving problems; [87]) may be an important correlate of academic performance. However, our study went beyond previous research by showing that the relation between Gf and academic success may be moderated by STR-SB quality from a domain-specific achievement perspective. Our data showed a stronger link between Gf and mathematics achievement, but not reading comprehension, in the high-STR-SB-quality group compared to the low one. Thus, our moderation hypothesis was only partially supported. We suggest that one mechanism explaining why a high-quality student-teacher relationship enhances the relation between Gf and school achievement might be the so-called principle of investment [37], according to which the degree of investment of fluid intelligence also depends on the variety of opportunities and programmes afforded by the learning environment, which promotes the acquisition of knowledge and skills and the development of academic achievement. In this respect, we suggest that students experiencing close relationships with their teachers and feeling strongly bonded to the school might not only have more opportunities to learn but might also use such opportunities more effectively, thanks to their motivation, self-regulation, and ability to regulate themselves at a metacognitive level during the learning process [54,88,89]. As such, they invest Gf more effectively compared to their counterparts with low STR-SB, leading to higher school achievement. More generally, the moderation effect we found may also be interpreted as the result of a successful or failed alignment process [42] between students' cognitive resources (e.g., Gf) and STR-SB quality. As suggested by Vandenbroucke et al.
[90] in a recent meta-analytic work including 2-to 12-year-old children, for students involved in positive interactions and bonds with teachers and school, this alignment mechanism may promote an increased exploratory and engagement capacity.Such students seem to feel more self-confident and safer in their environment and are able to face and persist even when faced with somewhat difficult tasks.These stimuli and experiences represent challenges that favour the formation of a new and more complex use of cognitive abilities in the school context.From this point of view, also, and perhaps particularly, Gf could develop better and more as a result of these stimulating activities, with consequences also on school attainment.However, the authors also suggested a second potential mechanism related to lower levels of stress when relationships and bonds with teachers and school are positive, as shown by lower salivary cortisol levels in students.This would allow to keep the stress levels in the average range (levels that are too high or too low are usually worse conditions), ensuring adequate cognitive and academic performance.Our findings are in line with both these two explanations and allow for extending them up to early adolescence.In addition, both Blair [91] and Pekrun et al. [92] posited that negative emotions, like anger, reduce achievement partly because they negatively affect higher-order cognitive processes (such as problem-solving, memory, and strategic thinking) and focus attention on a narrow set of behavioural options [93].There is substantial evidence that cognitive processes are strongly related to achievement; thus, evidence that negative emotions probably experienced in low-quality relationships are linked to these processes is consistent with the notion of moderation.In fact, relationship-related anxiety and anger may disrupt both students' ability to recall relevant material and academic success [94-97; for a meta-analysis, see 98].As Blair [91] noted, young individuals characterised by negative emotionality are likely to have a hard time applying higher-order cognitive processes simply because their emotional responses do not call for reflective planning and problem-solving, so these skills are underused and underdeveloped.When a student's experience of relationship-related negative emotion leads to focusing on the object of the emotion (as when a child ruminates on the morning's event that resulted in their anger), cognitive resources are diverted away from educational materials to events or circumstances that distract from learning.In this way, low-quality relationships resulting in negative emotions could interfere with scholastic activities by reducing resources needed to integrate and recall important details.This seems particularly valid for mathematics tasks that usually need more cognitive resources because they are mostly linked to the abstract domain and furthermore are not as common in everyday life as verbal comprehension and reading tasks. 
Nevertheless, this process seems to be dependent on the specific domain of achievement; namely, it seems to characterise mathematics achievement but not reading comprehension achievement.In formulating a possible interpretation of this result, we considered the relevance of non-cognitive variables, such as the emotional-motivational ones widely discussed in the literature in reference to academic achievement in mathematics.In scientific subjects, non-cognitive measures could play an important role in performance by interfering or facilitating the deployment of more domain-general cognitive resources, such as attentional resources, working memory, or efficiency of executive functions [99,100].These aspects certainly require further study, as the relations between these constructs are extremely complex.Another possible explanation is that individuals are not only more exposed to reading from childhood than mathematics [101], but they also receive more support for learning to read outside of school, for example through the explanations given by parents or grandparents in everyday situations [19,51,52].In line with Vygotsky's theory [43], these social influences may be equally important to promote the potential of youth compared to those provided by teachers at school.Thus, it might be that students at middle school developed or are developing their ability to use cognitive functions (as Gf) in the process of reading comprehension during activities with different significant adults.This could dampen the impact of the moderating effect of STR-SB quality on the association between Gf and reading comprehension.A further plausible explanation is linked to the specific reference context of the research.Historically, Italy is a country where much relevance has been given to verbal-linguistic rather than logical mathematical subjects, with the consequence that the idea that mathematics is a difficult and complicated discipline to learn is widespread [102].In this context, a positive STR-SB might have a buffering effect by helping students to be more aware that they can make the most of their resources even when studying mathematics [103].Lastly, we cannot exclude that methodological choices might have impacted the differential moderating effect of STR-SB.The result could in fact depend on the task selected for the evaluation of comprehension which is an ability that has been shown to rely more on non-language-specific abilities [104], and, therefore, might be scarcely susceptible to be influenced by teacher's intervention.Probably the selection of a single word reading task or reading text task could be more linked to a direct intervention of the teacher, as well as being linked to phonological skills which are usually a target of the teacher's intervention [105]. 
Limitations and implications There are several limitations to be noted when interpreting our results. First, our sample came from a single school because of the need to follow the participants prospectively and longitudinally in the simplest and most precise way. Moreover, its size was quite limited, and it was ethnically homogeneous. This might limit the generalizability of the results. Second, we assessed the quality of the relationships only by asking students to consider the Italian language and mathematics subject teachers. However, these general evaluations of relationships with teachers may be biased because the subject (liked or not, for example) may influence the students' assessment of the teachers. Also, many studies describe how teacher preference may influence, and be influenced by, students' relationships. Further investigations should take this into account to better clarify this potential confounding factor. Third, we did not assess the teachers' perspectives on STR-SB quality, due to the limits imposed by the schedule and commitments set by the school. Indeed, it would be very useful to have multi-perspective information on STR-SB quality to better understand how relational factors can influence both general cognitive resources and academic performance [106]. Fourth, we focused on just one possible cognitive correlate, namely Gf, and one possible non-cognitive correlate, namely STR-SB, of academic achievement. These are certainly crucial constructs to be considered, but additional factors deserve consideration, including, for example, the evaluation of the quality of relationships with teachers of specific subjects (and not with teachers in general). Fifth, the cross-sectional nature of this study precludes us from clearly drawing conclusions about the direction or the reciprocity [3] of the associations between Gf and mathematics/reading achievements, as well as about the stability of the moderating role of STR-SB quality. In line with this, alternative models could be hypothesized and compared to our proposed model, for example one whereby academic achievement affects relationships within the school [54,107]. Longitudinal data would permit clearer conclusions about both the associations between the studied variables and the developmental processes involved.
Despite these shortcomings, our findings have relevant implications. They indicate the importance of student-teacher relationships and bonds with school for specific domains of academic achievement (i.e., mathematics), which, in turn, might be associated with more general positive youth development outside of school [60,103]. This encourages the creation and implementation of interventions aimed at supporting high STR-SB quality to stimulate school success. Along this line, for example, a number of recent teacher professional development programmes have shown a positive impact on STR-SB [108,109]. Moreover, our results could help explain and improve the mixed outcomes of cognitive training, with most studies reporting no effects of training on Gf or academic achievement [110,111]. One explanation is that the relationship between trainers and students is a fundamental condition for training to be effective. Ensuring that this relationship is qualitatively positive may create the conditions for a better expression of students' personal resources, both in terms of general cognitive processes and academic performance. A further implication is that schools should strive to improve the socio-emotional climate of classes and, to do so, could systematically introduce measures of STR-SB from multiple points of view. This could favour an evaluation of the relational processes within the class, the implementation of corrective interventions when relationships are on the negative side, and, in the long term, an improvement in school performance among students. Fig 1. Mean Z-scores for affiliation with teacher, dissatisfaction with teacher, and bonds with school by the two STR quality profiles. STR = student-teacher relationships and school bonding. https://doi.org/10.1371/journal.pone.0290677.g001 Fig 2. Final estimated multiple-group model illustrating the moderating effect of the STR quality in the link of fluid intelligence with mathematics achievement (but not with reading comprehension). STR = student-teacher relationships. CFIT = Cattell's Culture Fair Intelligence Test. MA = Mathematics Achievement. RC = Reading Comprehension. Note. Standardized coefficients are shown. Solid lines represent significant, and dashed lines nonsignificant, pathways at p < .05. Coefficients in bold are significantly different across groups. Gender, as a controlling variable, and the related pathways are represented in light grey. Residuals are not shown for brevity. † p < .10, * p < .05. https://doi.org/10.1371/journal.pone.0290677.g002 Table 2. Correlations for key indicator and latent study variables, by STR-SB quality group. Lower diagonal: correlation matrix for students in the low STR-SB quality group (n = 100). Upper diagonal: correlation matrix for students in the high STR-SB quality group (n = 119). STR-SB = student-teacher relationships and school bonding. CFIT = Cattell's Culture Fair Intelligence Test.
8,892.8
2023-09-28T00:00:00.000
[ "Mathematics", "Psychology", "Education" ]
Reaction–Diffusion Model-Based Research on Formation Mechanism of Neuron Dendritic Spine Patterns The pattern abnormalities of dendritic spines, tiny protrusions on neuron dendrites, have been found to be related to multiple nervous system diseases, such as Parkinson's disease and schizophrenia. Determining the factors affecting spine patterns is of vital importance to exploring the pathogenesis of these diseases and, further, to the search for treatments. Although dendritic spines have been a hot topic in neuroscience in recent years, there is still a lack of systematic study of the mechanism by which their patterns form. This paper provides a reinterpretation of the reaction-diffusion model to simulate the formation process of dendritic spines and, further, to study the factors affecting spine patterns. First, all four classic shapes of spines, mushroom-type, stubby-type, thin-type, and branched-type, were reproduced using the model. We found that the consumption rate of substrates by the cytoskeleton is a key factor regulating spine shape. Moreover, we found that the density of spines can be regulated by the amount of an exogenous activator and inhibitor, which is in accordance with the anatomical results found in hippocampal CA1 of SD rats with glioma. Further, we analyzed, through Turing instability analysis, the inner mechanism by which the above model parameters regulate the dendritic spine pattern, and concluded that exogenous inhibitors and activators regulate spine density by changing the Turing wavelength. Finally, we discussed the deep regulation mechanisms of several reported regulators of dendritic spine shape and density based on our simulation results. Our work might draw attention to mathematical-model-based pathogenesis research for neuron diseases related to dendritic spine pattern abnormalities and spark inspiration for treatment research for these diseases. INTRODUCTION Dendritic spines are tiny protrusions on neuron dendrites which widely exist in the dendrites of higher animals and play an important role in the formation of most excitatory axodendritic synapses (Harris and Kater, 1994). The function of a spine is related to its shape (Kasai et al., 2003;Bourne and Harris, 2007). Traditionally, there are four basic shapes of dendritic spines: thin-type, stubby-type, mushroom-type, and branched-type (González-Tapia et al., 2016;Luczynski et al., 2016). Among them, thin dendritic spines show high plasticity and are related to learning, while mushroom dendritic spines show weak plasticity and are related to memory function. In addition, the density of spines directly influences the density of synapses. Researchers have found that pattern abnormalities of dendritic spines, especially an abnormal proportion of the various types of dendritic spines and variations in dendritic spine density, are related to multiple nervous system diseases. For example, Pyronneau et al. reported an overabundance of thin-type spines, a kind of immature dendritic spine, in the somatosensory cortex of Fragile X syndrome model mice (Pyronneau et al., 2017). It has been reported that striatal dendrites have few dendritic spines in Parkinson's disease (McNeill et al., 1988). Reduced dendritic spine density has also been found in individuals with schizophrenia (Glantz and Lewis, 2000;Sweet et al., 2008) and Huntington's disease (Richards et al., 2011).
Also, it is recognized that dendritic spine loss is an early feature of Alzheimer's disease (Kommaddi et al., 2018;O'Neal et al., 2018). Thus, the exploration of the factors determining the shape and density of dendritic spines is of vital importance to understanding the pathogenesis of these diseases and, further, to the search for treatments. Current research on dendritic spine patterns is mainly performed by statically observing the cerebral cortex in animals (Kommaddi et al., 2018;Ratliff et al., 2019). It has been confirmed that the pattern of dendritic spines is influenced by neuron activity (Portera-Cailliau et al., 2003;González-Tapia et al., 2016) and by some substances, such as drebrin (Hayashi et al., 1996), the Rho GTPase Rac1 (Pyronneau et al., 2017), and F-actin (Kommaddi et al., 2018). These studies usually addressed only one factor of dendritic spine patterns at a time, whereas the pattern formation of dendritic spines is a dynamic process involving a variety of chemical reactions regulated by multiple factors. In summary, there is still a lack of systematic study of the pattern-formation mechanism that captures the influence of multiple factors on the resulting dendritic spine pattern. Mathematical modeling of dendritic spine development has become an important tool to study the structure and plasticity of dendritic spines in recent years. For example, Kasai et al. used the volume of dendritic spines as an index to measure their structure and applied a Brownian motion model to simulate the volume of dendritic spines, exploring the close relationship between spine structure and function (Kasai et al., 2010). The Brownian motion model describes a random phenomenon, but the pattern formation of dendritic spines is a process regulated by genes and environment rather than a random process, making that model unsuitable for simulating pattern formation. Besides, Miermans et al. simulated dendritic spine membranes during shape alteration using the Canham-Helfrich energy functional, which describes the relationship between the bending rigidity of the membrane and the force generated by the cytoskeleton (Miermans et al., 2017). Their results demonstrate that the cytoskeleton is a key factor in determining the shape of dendritic spines, but this model lacks an explanation for the changes in the cytoskeleton, and their hypothesis of approximate rotational symmetry of dendritic spines seems inapplicable to branched-type dendritic spines. Varner et al. explained the pattern-formation process of epithelial cells using four mechanisms: cell division, cell insertion, cell deformation, and media filling (Varner and Nelson, 2014). However, these explanations cannot be applied to the study of subcellular structures such as dendritic spines. In Turing's theory, if the chemical substances involved in the interaction diffuse, the original equilibrium state can be broken; this is called Turing instability (Turing, 1952). The reaction-diffusion model (Gierer and Meinhardt, 1972;Meinhardt, 1976), based on Turing's theory, illuminates the reactions between chemical substances in developing biological systems. It has been utilized to simulate Pomacanthus skin stripe patterns (Kondo and Asai, 1995), vascular mesenchymal cell patterns (Garfinkel et al., 2004), mouse limb development (Miura et al., 2006), lung branching patterns (Guo et al., 2014a;Hagiwara et al., 2015), and self-organizing morphogenesis (Okuda et al., 2018;Landge et al., 2020).
In our previous work, side branching and tip branching of the lung were investigated using the reaction-diffusion model, which was verified by spatiotemporal parameters (Guo et al., 2014a). However, the patterns developed in that work are not sufficient to describe the complex patterns of dendritic spines: unlike the obtained side branches, which were equally spaced, the dendritic spines studied in this paper are usually unevenly spaced. In spite of its potential for simulating branching patterns, the strong non-linearity of the reaction-diffusion model makes it difficult to read the relationship between parameter values and simulation results directly, which complicates the analysis of the inner mechanism of the model. To address this problem, the dispersion relation was used to analyze Turing instability (Guo et al., 2014b;Saleem and Ali, 2018) and to establish the mathematical mechanism behind the simulation results. In previous research, we investigated this mathematical mechanism through Turing instability analysis and found that different Turing wavelengths underlie the different patterns in a lung (Xu et al., 2017). However, the relationship between Turing wavelength and branch density has not been investigated yet. This paper reinterprets the traditional reaction-diffusion model by introducing an exogenous activator term and an exogenous inhibitor term to simulate the formation process of dendritic spines and, further, to study the factors affecting spine patterns. All four spine shapes, mushroom-type, stubby-type, thin-type, and branched-type, were reproduced using the model. Further, we found that the consumption rate of substrates by the cytoskeleton regulates the shape. Secondly, we found that the addition of an exogenous activator causes the spines to become denser, while the addition of an exogenous inhibitor causes the spines to become sparser, which provides a potential explanation for the anatomical finding of decreased spine density in hippocampal CA1 of SD rats with glioma. Finally, through Turing instability analysis, we found that Turing wavelength variation is the deep mathematical mechanism behind the regulation of spine density by the above parameters. Namely, the addition of an exogenous activator decreases the Turing wavelength, causing the density of the dendritic spines to increase, while the addition of an exogenous inhibitor increases the Turing wavelength, causing the density of the dendritic spines to decrease. Finally, the deep regulation mechanisms of several regulators of dendritic spine shape and density reported in other references were discussed based on our simulation results. We hope that our work can draw attention to mathematical-model-based research on neuron diseases related to dendritic spine pattern abnormalities and spark inspiration for treatment research for these diseases.
FIGURE 1 | Schematic of the development process. The neuron expresses activators and inhibitors. Activators gather at the tip, while inhibitors diffuse into the surrounding area because of their higher diffusion rate, so that only the tip develops. This mechanism makes dendritic spines grow in a certain direction instead of exhibiting isotropic growth.
Reaction-Diffusion Model The reaction-diffusion model is defined by Equation (1) (Meinhardt, 1976). It is a group of partial differential equations describing the reactions between activator A, inhibitor H, substrate S, and cytoskeleton Y.
The reaction-diffusion model illuminates the reactions between chemical substances in developing biological systems. According to this model, neurons express activators (at a rate ρ_A) and inhibitors (at a rate ρ_H). Activators are self-catalytic (at a rate c) and catalyze inhibitors (at a rate c), while inhibitors inhibit activators. Simultaneously, activators and inhibitors undergo degradation and diffusion (activators degrade at a rate µ and diffuse at a rate D_A, whereas inhibitors degrade at a rate υ and diffuse at a rate D_H). High concentrations of activator accelerate the polymerization of the cytoskeleton, inducing the development of dendritic spines. Because the diffusion rate of the inhibitor is higher than that of the activator, polymerization of the cytoskeleton in the growth center is accelerated, while polymerization of the cytoskeleton outside the growth center is inhibited. Thus, the dendritic spine grows in a certain direction instead of displaying isotropic growth. The neuron creates substrate (at a rate c_0), while the cytoskeleton consumes substrate (at a rate ε). Substrate accelerates the catalysis of the activator. The substrate likewise undergoes degradation and diffusion (it degrades at a rate γ and diffuses at a rate D_S). Because the synthesis of the cytoskeleton consumes substrate, the peak concentration areas of activators and inhibitors, as well as the cytoskeleton, move in the direction of high substrate concentrations (Figure 1). The development patterns of dendritic spines are determined by the neuron activity (Bloodgood and Sabatini, 2005) and by exogenous substances. The neuron activity is described by the rate at which substrate is consumed by the cytoskeleton (ε) in our model. Exogenous substances include exogenous activators and exogenous inhibitors (Kommaddi et al., 2018). To describe the influence of exogenous substances, we added an exogenous activator term (δ_A) and an exogenous inhibitor term (δ_H) to the reaction-diffusion model (Equation (2)). The new model includes 16 parameters, most of which are fixed parameters, such as the reaction-term parameter c, the degradation-term parameters µ, υ, and γ, the diffusion-term parameters D_A, D_H, and D_S, and the growth-term parameters d, e, and f. The values of the fixed parameters are decided by the chemical characteristics of the substances or cells, and the model has been proven to be robust to perturbations of the fixed parameters (Murray, 1982). The other parameters are variable (ρ_A, δ_A, ρ_H, δ_H, c_0, and ε), and their values depend on the condition of the developing system. In this work, we studied the effect of the neuron activity and the exogenous substances on dendritic spines; thus, we treated the parameters δ_A, δ_H, and ε in Equation (2) as the variables of interest.
FIGURE 2 | (A) The original state of the spine simulation is used to simulate a single spine in different conditions. Simulations were performed on a 100×100 grid. The grid size of space is 0.3. Fixed parameters in Equation (2): 1, and f = 10. (B) The first step in the dendrite simulation is used to simulate the dendrite trunk. Simulations were performed on a 150×200 grid. The grid size of the space is 0.3. Parameters in Equation (2): (C) The second step in the dendrite simulation grows from (A) and is used to simulate spines in different conditions. Fixed parameters in Equation (2):
Numerical Simulation In this work, we investigated the factors of shape and density of spines using a reaction-diffusion model on different spatial scales.
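To make this numerical scheme concrete, the sketch below integrates a reaction-diffusion system of the kind described above on a 2D grid with explicit finite differences. Because the exact right-hand sides of Equations (1)-(2) are not reproduced in this text, the update rule follows the classic Meinhardt activator-inhibitor-substrate-cytoskeleton formulation that this work builds on (Meinhardt, 1976; Guo et al., 2014a), with the exogenous terms δ_A and δ_H added as constant sources; all parameter values, the grid, and the seed region are illustrative placeholders, not the paper's settings.

```python
import numpy as np

def laplacian(u, dx=0.3):
    """Five-point Laplacian with zero-flux (reflecting) boundaries."""
    up = np.pad(u, 1, mode="edge")
    return (up[:-2, 1:-1] + up[2:, 1:-1] + up[1:-1, :-2] + up[1:-1, 2:] - 4.0 * u) / dx**2

def step(A, H, S, Y, p, dt=0.05, dx=0.3):
    """One explicit Euler step of an assumed Meinhardt-type model:
    activator A, inhibitor H, substrate S, cytoskeleton Y, with the
    exogenous source terms delta_A and delta_H added as in Equation (2)."""
    eps_h = 1e-6                                    # guard against division by zero
    auto = p["c"] * S * A**2 / (H + eps_h)          # substrate-assisted self-catalysis
    dA = p["D_A"] * laplacian(A, dx) + auto - p["mu"] * A + p["rho_A"] * Y + p["delta_A"]
    dH = p["D_H"] * laplacian(H, dx) + p["c"] * S * A**2 - p["nu"] * H + p["rho_H"] * Y + p["delta_H"]
    dS = p["D_S"] * laplacian(S, dx) + p["c0"] - p["gamma"] * S - p["eps"] * S * Y
    dY = p["d"] * A - p["e"] * Y + Y**2 / (1.0 + p["f"] * Y**2)   # switch-like cytoskeleton growth
    return A + dt * dA, H + dt * dH, S + dt * dS, Y + dt * dY

# Illustrative parameter values only (placeholders, not the paper's).
p = dict(c=0.002, mu=0.16, nu=0.04, gamma=0.02, c0=0.02, eps=0.1,
         rho_A=0.03, rho_H=0.0003, delta_A=0.01, delta_H=0.00005,
         D_A=0.02, D_H=0.26, D_S=0.06, d=0.008, e=0.1, f=10.0)

n = 100
A = 0.01 * np.random.rand(n, n); H = np.full((n, n), 0.1)
S = np.ones((n, n)); Y = np.zeros((n, n))
Y[45:55, 0:5] = 1.0                         # seed: small rectangular "original state"
for _ in range(20000):
    A, H, S, Y = step(A, H, S, Y, p)
# Y now holds the simulated cytoskeleton pattern (the growing spine).
```

Running the loop long enough lets the cytoskeleton field Y develop a localized protrusion from the seeded rectangle; changing eps, delta_A, or delta_H in the parameter dictionary is the analogue of the parameter sweeps reported below.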
First, we simulated a single spine to explore the influence of the model parameters on the shape of the spine (Figure 2A). This simulation was performed on a 100×100 grid, and the original state was a 10×5 pixel rectangular area. Second, we simulated a dendrite with spines to explore the influence of the model parameters on the density of spines (Figures 2B,C). This simulation was performed on a 150×200 grid, and the original state was a 5×10 pixel rectangular area (Figure 2B). Then, a dendrite developed under certain conditions (Figure 2C). Turing Instability Analysis Method To verify the simulation results mathematically, we explored the Turing patterns underlying dendritic spine patterns with our previously developed decoupling method (Guo et al., 2014b). The substrate and cytoskeleton are considered dependent variables of time and space, written as S(x, y, t) and Y(x, y, t). Then, we put these variables into Equation (2) as parameters and obtained a reduced activator-inhibitor model (Equation (3)). Branching is a system that can grow and form a stable mode, which corresponds to a damped oscillatory system in the mathematical model. Some points in S-Y space correspond to damped oscillatory systems. The set of these points is called the Turing instability space, and the wavelength of the damped oscillatory system is called the Turing wavelength (Turing, 1952). According to its definition, the mathematical expression of the Turing space can be calculated. The detailed derivation process is given in our previous work (Xu et al., 2017). To explore dendritic spine development patterns according to Turing instability, a scheme was performed according to the following steps.
• Choose an interesting point (the branching point in a branched spine, or a random point on the central axis in the other types) in a simulation result and plot the S-Y curve of this point.
• Calculate the Turing instability space using Equation (3).
• Find the intersection of the S-Y curve and the Turing instability space.
• According to the form of the solution of Equation (2), calculate the dispersion relation at the intersection point.
• Record the maximum of the real part of the eigenvalue (λ_m) and the corresponding wavenumber (k_m).
• Calculate the Turing wavelength of the point chosen in Step 1 from the recorded wavenumber (the wavelength equals 2π/k_m).
We used Turing instability analysis to explore the differences in the mathematical mechanism behind the different patterns of dendritic spines in the section Turing Instability Underlying Dendritic Spines. Anatomy of Hippocampal CA1 in SD Rat In this study, images from Golgi-Cox-stained brain slices from SD rats were compared with the simulation results. Golgi-Cox staining was carried out with a commercial Golgi staining kit (Keyijiaxin, Tianjin, China). As soon as they were taken from the skulls, the brains were stored in Golgi-Cox staining solution in a dark place for 2 weeks, and the solution was replaced at intervals of 48 h. Then, brain slices were produced using a vibratome (VT 1000S, Leica, Germany) with a thickness of 150 µm. The slices were placed on slides covered with 2% gelatine. Next, the slices were dyed with ammonia for 60 min; washed with water three times; fixed with Kodak film for 30 min; and then washed with water, dehydrated, cleared, and mounted. Later, dendritic spines in the CA1 region of the hippocampus were imaged under a 100× objective lens with a digital camera. Dendritic trees were detected along CA1 tertiary dendrites derived from secondary dendrites that branch off the primary dendrite.
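The dispersion-relation step in the Turing Instability Analysis Method above can also be illustrated numerically. Since the explicit form of Equation (3) is not reproduced in this text, the sketch below assumes the standard linear-stability recipe for a two-variable activator-inhibitor system with S and Y frozen as parameters: form the Jacobian of the reaction terms at a chosen point, add the diffusion contribution -k²·diag(D_A, D_H), scan the wavenumber k, record the k_m that maximizes the real part of the leading eigenvalue, and take 2π/k_m as the Turing wavelength. The reaction terms and all numerical values are the same illustrative assumptions used in the earlier sketch, not the paper's.

```python
import numpy as np

def reaction(A, H, S, Y, p):
    """Reaction terms of the assumed activator-inhibitor subsystem,
    with S and Y treated as frozen parameters (cf. Equation (3))."""
    fA = p["c"] * S * A**2 / H - p["mu"] * A + p["rho_A"] * Y + p["delta_A"]
    fH = p["c"] * S * A**2 - p["nu"] * H + p["rho_H"] * Y + p["delta_H"]
    return np.array([fA, fH])

def dispersion(A0, H0, S, Y, p, k_values):
    """Real part of the leading eigenvalue of J - k^2 * diag(D_A, D_H)
    for each wavenumber k (numerical Jacobian at the point (A0, H0))."""
    h = 1e-6
    f0 = reaction(A0, H0, S, Y, p)
    J = np.zeros((2, 2))
    J[:, 0] = (reaction(A0 + h, H0, S, Y, p) - f0) / h   # derivatives w.r.t. A
    J[:, 1] = (reaction(A0, H0 + h, S, Y, p) - f0) / h   # derivatives w.r.t. H
    D = np.diag([p["D_A"], p["D_H"]])
    return np.array([np.linalg.eigvals(J - (k**2) * D).real.max() for k in k_values])

# Illustrative usage with placeholder values for the frozen S, Y and the state (A0, H0).
p = dict(c=0.002, mu=0.16, nu=0.04, rho_A=0.03, rho_H=0.0003,
         delta_A=0.01, delta_H=0.00005, D_A=0.02, D_H=0.26)
k = np.linspace(0.01, 5.0, 500)
g = dispersion(A0=1.0, H0=0.05, S=1.0, Y=0.5, p=p, k_values=k)
k_m = k[np.argmax(g)]                  # wavenumber of the fastest-growing mode
wavelength = 2 * np.pi / k_m           # Turing wavelength of the chosen point
print(f"lambda_m = {g.max():.4f}, k_m = {k_m:.3f}, Turing wavelength = {wavelength:.3f}")
```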
All animal experiments were approved by the Animal Research Ethics Committee, School of Medicine, Nankai University and were performed in accordance with the Animal Management Rules of the Ministry of Health of the People's Republic of China. Dendritic Spine Shape Factors Research Based on Reaction-Diffusion Model There are four traditional types of dendritic spines: mushroom-type, stubby-type, thin-type, and branched-type (González-Tapia et al., 2016;Luczynski et al., 2016). In order to research the factors determining dendritic spine shape, we first proposed a classification method for spine shape based on real spine microimages. Then, we classified the spines simulated by our reaction-diffusion model and found how dendritic spine shape changes under different conditions. Classification Method of Dendritic Spine Shape At present, the classification methods for dendritic spine shape are qualitative and require expert experience. To study the shape of dendritic spines quantitatively, metrics to classify dendritic spines need to be determined. Given that a branched-type dendritic spine is easy to identify, here we only propose a classification method for the three types of non-branched spines. First, we measured three geometric qualities of a dendritic spine, namely, the height (h), the extreme width of the head (w_head), and the extreme width of the neck (w_neck), as shown in Figures 3A-D. Then, based on these three values, we constructed the following two dimensionless metrics:
• Relative average width (RAW), which measures the overall thickness of a spine.
• Relative constriction width (RCW), which measures the difference between the head width and the neck width.
We calculated the RAWs and RCWs of eight dendritic spines (including three mushroom-type spines, three stubby-type spines, and two thin-type spines, shown in Figure 3E). Thin-type spines have a thin head and neck, so their RAW value is small. Both the head and the neck of stubby-type spines are thick, and the head is thinner or only slightly thicker than the neck, so their RAW value is usually large and their RCW value is small or even negative. Mushroom-type spines usually have a large head and a thin neck, so their values of RAW and RCW are both large. Based on the above analysis, we set the criteria for the three types of dendritic spines. As shown in Figure 3F, the shape differences among the three types of spines are obvious. We chose RAW = 0.4 and RCW = 0.25 as two criteria to classify the three types. Finally, we presented a flow chart to distinguish the shapes of dendritic spines (Figure 3G). First, if the dendritic spine has a branching structure, it is recognized as a branched-type spine. Second, if the RAW value is lower than 0.4, it is regarded as a thin-type spine. Finally, if the RCW value is lower than 0.25, it is identified as a stubby-type spine. Otherwise, it is recognized as a mushroom-type spine.
FIGURE 3 | Metrics of dendritic spine shape. (A-D) Three geometric qualities of dendritic spines, namely, the height (h), the extreme width of the head (w_head), and the extreme width of the neck (w_neck). (A) is a mushroom-type spine, (B) is a stubby-type spine, (C) is a thin-type spine, and (D) is a branched-type spine. For a branched-type spine, the extreme width of the head is meaningless. (E) Top: Golgi-Cox staining of brain slices from SD rat hippocampal CA1. Four types of spines emerge in the images. Bottom: We found nine dendritic spines (including three mushroom-type spines, three stubby-type spines, two thin-type spines, and one branched-type spine) in the above images. (F) Two-metrics distribution of mushroom-type, stubby-type, and thin-type spines. The three types can be classified by the criteria RAW = 0.4 and RCW = 0.25. (G) Flow chart of the classification method of dendritic spine shape.
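The classification flow chart can be expressed as a small function. The thresholds (RAW = 0.4, RCW = 0.25) and the decision order follow the description above; the exact formulas for computing RAW and RCW from h, w_head, and w_neck are defined in the paper's equations and are not re-derived here, so the helper below simply takes precomputed metric values.

```python
def classify_spine(has_branch: bool, raw: float, rcw: float) -> str:
    """Classify a dendritic spine following the flow chart of Figure 3G.

    `raw` and `rcw` are the relative average width and relative constriction
    width of the spine, assumed to be computed beforehand from h, w_head, and
    w_neck according to the definitions given in the paper.
    """
    if has_branch:
        return "branched"
    if raw < 0.4:          # overall thin spine
        return "thin"
    if rcw < 0.25:         # thick, but without a pronounced head/neck constriction
        return "stubby"
    return "mushroom"      # thick, with a clearly wider head than neck

# Example usage with made-up metric values:
print(classify_spine(False, raw=0.55, rcw=0.40))  # -> "mushroom"
print(classify_spine(False, raw=0.30, rcw=0.10))  # -> "thin"
```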
Consumption Rate of Substrate Dominates the Spine Shape Based on the Reaction-Diffusion Model In our previous simulation, the rate at which substrates are consumed by cells was shown to play an important role in the branching pattern (Xu et al., 2017). Thus, we assumed that the consumption rate of substrates, namely, the neuron activity, has an effect on the spine shape. To verify this assumption, we performed the following single-spine simulations. First, to investigate the influence of the consumption rate of substrates (ε) on the shape of the dendritic spine, we adjusted the value of the parameter ε in Equation (2). We varied the value of ε from 0.01 to 0.9, and part of the obtained results are shown in Figure 4A (with different amplification factors) (also see Supplementary Videos 1-4, respectively). As the value of ε increases, both the RAW and RCW values decrease, and the dendritic spine shape sequentially passes through mushroom (0 < ε ≤ 0.02), stubby (0.02 < ε ≤ 0.04), thin (0.04 < ε ≤ 0.7), and branched (0.7 < ε ≤ 0.9) forms. All four dendritic spine shapes can be obtained with an increase in the consumption rate of substrates. This result indicates that neuron activity regulates the shape of the dendritic spine.
FIGURE 4 | Influence of parameters on the shape of dendritic spines. (A) Some of the results of spine simulations for the condition of δ_A = 0.01, δ_H = 0.00005, 0.01 ≤ ε ≤ 0.9 (with different amplification factors). Arrows mark the newborn parts of spines growing from the original state in Figure 2C. With the enhancement of neuron activity, the values of RAW and RCW decrease, and all four shapes are obtained. The shape can be mushroom (0 < ε ≤ 0.02), stubby (0.02 < ε ≤ 0.04), thin (0.04 < ε ≤ 0.7), or branched (0.7 < ε ≤ 0.9) type. (B) The results of spine simulation in the condition of δ_A = 0, 0.005, 0.01, 0.015, and 0.02, δ_H = 0.00005, ε = 0.03, 0.05, 0.7, and 0.9. With the addition of an exogenous activator, the stubby-type spine becomes mushroom-type, and the thin-type spine becomes stubby-type.
In addition, to investigate the influence of the exogenous activator (δ_A) and exogenous inhibitor (δ_H) on the shape of the dendritic spine, we adjusted the values of the parameters δ_A and δ_H in Equation (2), respectively. We varied the values of δ_A under the conditions of ε = 0.03, 0.05, 0.7, and 0.9 and the values of δ_H under the conditions of ε = 0.01, 0.03, 0.07, and 0.7, and the results are shown in Figures 4B,C (also see Supplementary Figures 3, 4, respectively). According to the results, we found that a stubby-type spine transforms into mushroom-type and a thin-type spine transforms into stubby-type with an increase in δ_A; additionally, a branched-type spine becomes thin-type with an increase in δ_H. However, δ_A has no effect on branched-type spines, and δ_H has no effect on mushroom-type and stubby-type spines. These results indicate that both δ_A and δ_H also regulate the spine shape, but they are not dominant factors compared with the consumption rate of substrates. Therefore, dendritic spines transform, in turn, from mushroom-type to stubby-type, thin-type, and branched-type as the consumption rate of substrates increases.
In contrast, exogenous activators affect non-branched dendritic spines, and exogenous inhibitors affect branched dendritic spines. Thus, the consumption rate of substrates (neuron activity) determines the shape of dendritic spines. Dendritic Spine Density Factors Research Based on Reaction-Diffusion Model Dendritic spines participate in the formation of most excitatory axodendritic synapses, so the density of spines directly influences the density of synapses. In order to research the factors determining dendritic spine density, we simulated a dendrite with spines using the reaction-diffusion model and found the relationship between dendritic spine density and the key factors. Moreover, we observed a decrease of spine density in the hippocampal CA1 of rats with glioma and proposed a potential reason for this phenomenon by comparing the simulation results with the observations. Further, we used Turing instability to explain the mathematical mechanism behind the above parameters regulating spine density and found that exogenous inhibitors and activators regulate spine density by changing the Turing wavelength.
FIGURE 5 | The results of dendrite simulation under the conditions of δ_A = 0.01, δ_H = 0.00005, and ε = 0.5, 1, 1.5, 2, and 2.5. With increased neuron activity, the shape varies from non-branched-dominant to branched-dominant. Branched-type spines take up more space; thus, the growth of surrounding spines is inhibited. In addition, the parameter ε has no effect on the density; for example, the densities are the same under the conditions of ε = 2 and ε = 2.5. In (A-C), fixed parameters in Equation (2):
Exogenous Substances Regulate Spine Density To investigate the factors determining dendritic spine density, we next simulated, through dendrite simulations, the different spine densities that are seen across multiple spines. In our previous research, we found that the rates of activator and inhibitor secretion from cells play an important role in the density of side branching (Guo et al., 2014a). Similarly, it is reasonable to assume that the exogenous activator and inhibitor are two key factors influencing the density of dendritic spines. First, in order to find out the effect of the exogenous activator and inhibitor on spine density, we adjusted the values of the two parameters δ_A and δ_H based on the standard values of δ_A = 0.01, δ_H = 0.00005, and ε = 1, and we obtained two groups of results (Figures 5A,B). The results showed that the density of dendritic spines is positively correlated with δ_A and negatively correlated with δ_H. Next, we adjusted the value of the parameter ε to determine whether the consumption rate of substrates is another factor of density, and the results are shown in Figure 5C (also see Supplementary Figures 1, 2, respectively). We noticed that the spine shape varied from non-branched-dominant to branched-dominant when ε varies from 0.5 to 2.0. Meanwhile, the spine density does not change significantly when ε varies; for example, the densities are the same under the conditions of ε = 2 and ε = 2.5. Through dendrite simulations, we found that exogenous activators increase the density of spines, while exogenous inhibitors have the opposite effect. In comparison with exogenous substances, neuron activity has no significant effect on the density. Application in the Hippocampal CA1 of Rats The hippocampus plays an important role in memory function and cognitive abilities (Muller et al., 1996).
Certain diseases, such as glioma, affect the developmental patterns of dendritic spines on hippocampal neurons. It has also been reported that impairment of neurocognitive function is a common consequence of glioma, both in glioma patients (Wefel et al.) and in rats with glioma (Wang et al., 2010;Hao et al., 2018). Through anatomy and neuron microimaging (see the section Anatomy of Hippocampal CA1 in SD Rat for details), we found that dendritic spines in rats with glioma were less dense (Figures 6A,B, also see Supplementary Videos 5, 6, respectively). To study the reasons for the various densities in the rat hippocampal CA1, we compared the microscopic images of neurons with our simulation results. It seems that the spine patterns in the brains of the rat sham group were similar to those in the simulation results under the condition of δ_A = 0.01, δ_H = 0, ε = 1, while the spine patterns in the brains of the rat glioma group were similar to those in the simulation results under the condition of δ_A = 0.01, δ_H = 0.0001, ε = 1 (Figure 5B). Thus, we consider that the addition of exogenous inhibitors is a potential reason for the decrease of dendritic spine density caused by glioma. Turing Instability Underlying Dendritic Spines Turing pointed out that the diffusion of chemical substances will break the original equilibrium state of substance concentrations, which is called Turing instability (Turing, 1952). Branching patterns can only be generated from models with Turing instability. In order to qualitatively analyze the Turing instability, equilibrium position, and periodicity of the model solution, we proposed a Turing instability analysis method using the dispersion relation in previous research (Guo et al., 2014b) and found that the Turing wavelength is the internal factor causing the change of branching pattern in a lung (Xu et al., 2017) (see the section Turing Instability Analysis Method for more details). Exogenous substances have an effect on the Turing instability space in which a stable pattern can appear, but have no effect on the S-Y curve that shows the concentration relationship between substrate and cytoskeleton during development, which can be derived from Equation (3). We adjusted the value of the parameter δ_A from 0 to 0.2 and then drew the S-Y curve and the Turing instability space in the S-Y space (Figure 7A). Three intersection points of the Turing instability spaces and the corresponding S-Y curves were marked with black points. These points were substituted into Equations (4-6) in order to calculate the Turing wavelengths (Figure 7B). An increase in the parameter δ_A decreases the Turing wavelength. Similarly, we found the intersection points and calculated the Turing wavelength under the conditions of δ_H = 0, 0.00005, and 0.0001 (Figures 7C,D). An increase in the parameter δ_H increases the Turing wavelength. As the Turing wavelength reflects the spatial periodicity of spines, it is negatively correlated with the density of dendritic spines. In conclusion, exogenous activators make the Turing wavelength smaller and cause an increase in the density of dendritic spines, while exogenous inhibitors increase the Turing wavelength and cause a decrease in the density of dendritic spines. DISCUSSION In recent years, various chemicals have been reported to be capable of regulating the process of dendritic spine development. Our research may help explain their regulatory mechanisms from a mathematical point of view.
For example, actin filaments (F-actin) were considered to be key in regulating the shape of dendritic spines (Miermans et al., 2017). We found that the cytoskeleton is one key factor regulating cell morphology. Hence, F-actin might be considered as the cytoskeleton (Y) in our model. It has been found that drebrin is an actin-binding protein in the dendritic spine, and its overexpression causes spine elongation (Hayashi and Shirao, 1999;Koganezawa et al., 2017;Hanamura et al., 2018). Bernstein reported that cofilin severs F-actin, contributing to actin dynamics (Bernstein and Bamburg, 2010). In addition, Calabrese suggested that dendritic spine growth correlates with decreased cofilin activity (Calabrese et al., 2014). According to our simulation results, drebrin and cofilin play roles similar to those of the activator (A) and the inhibitor (H) in our model, respectively. Adenosine triphosphate (ATP) is closely related to F-actin polymerization and depolymerization (Katkar et al., 2018;Merino et al., 2018), which implies that ATP may correspond to the substrate (S) in our model. Based on these hypotheses, we describe our inferences as follows: (1) the overexpression of drebrin promotes the binding of F-actin and increases the density of dendritic spines, (2) the overexpression of cofilin hinders the binding of F-actin and decreases the density of dendritic spines, and (3) an increase in ATP consumption during the process of creating F-actin results in a different F-actin pattern and causes spines to become mushroom-type, stubby-type, thin-type, and branched-type, in turn. Verification experiments on the morphogens would be helpful for correcting the model parameters and supporting the conclusions of this work. Here, we propose two ideas to verify the morphogens mentioned above: (1) research on the quantitative relationship between spine density and the addition of a substance that influences the expression of drebrin or cofilin, and (2) research on the quantitative relationship between the spine shape distribution and ATP consumption during the process of creating F-actin. Moreover, in order to compare the spatiotemporal parameters between simulations and verification experiments quantitatively, 3D simulation is necessary. With our method, certain diseases could be systematically investigated at the level of chemical reactions. For example, an anomalous rise of Rho GTPase Rac1 activity inhibits cofilin in mice with Fragile X syndrome because of a trinucleotide expansion in the FMR1 gene on the X chromosome (Pyronneau et al., 2017). In our model, a decrease in δ_H decreases the concentration of the inhibitor (H), which results in dense dendritic spines. In another study, the intrathecal administration of latrunculin A, an actin-depolymerizing agent, in mice resulted in decreased F-actin levels and symptoms of Alzheimer's disease. Conversely, the intrathecal administration of jasplakinolide, a molecule that stabilizes F-actin, in mice restored F-actin levels and improved symptoms (Kommaddi et al., 2018). The effects of latrunculin A and jasplakinolide are similar to those of the exogenous inhibitors and exogenous activators in this model, respectively. Exogenous activators promote the synthesis of the cytoskeleton, while exogenous inhibitors promote the decomposition of the cytoskeleton. In conclusion, this work was devoted to revealing the mechanism behind the development patterns of dendritic spines.
The results show that the consumption rate of substrate dominates the shape, while the addition of exogenous activators and exogenous inhibitors dominates the density. Our work provides a potential explanation for the sparser spines observed in the brains of SD rats with glioma and may also help explain some diseases reported in the literature, such as Fragile X syndrome and Alzheimer's disease. Our research provides novel and fresh insight into the development patterns of dendritic spines, helping the search for treatments for related diseases. DATA AVAILABILITY STATEMENT The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding authors. ETHICS STATEMENT The animal study was reviewed and approved by the Animal Research Ethics Committee, School of Medicine, Nankai University. AUTHOR CONTRIBUTIONS YJ: conceptualization, methodology, software, formal analysis, investigation, writing-original draft, writing-review and editing, and visualization. QZ: conceptualization, methodology, writing-original draft, writing-review and editing, and funding acquisition. HY: methodology, validation, formal analysis, investigation, writing-original draft, writing-review and editing, and visualization. SG: methodology, software, formal analysis, data curation, and writing-review and editing. MS: validation, formal analysis, writing-review and editing, and funding acquisition. ZY: conceptualization, writing-review and editing, and supervision. XZ: conceptualization, writing-review and editing, supervision, project administration, and funding acquisition. All authors contributed to the article and approved the submitted version.
7,924
2021-06-14T00:00:00.000
[ "Biology" ]
On the Faithfulness for E-commerce Product Summarization In this work, we present a model to generate e-commerce product summaries. The consistency between the generated summary and the product attributes is an essential criterion for the e-commerce product summarization task. To enhance consistency, first, we encode the product attribute table to guide the process of summary generation. Second, we identify the attribute words in the vocabulary, and we constrain these attribute words so that they can appear in the summaries only through copying from the source; i.e., attribute words not in the source cannot be generated. We construct a Chinese e-commerce product summarization dataset, and the experimental results on this dataset demonstrate that our models significantly improve faithfulness. Introduction The fast growth of the e-commerce market causes information overload, which hinders users from finding the products they need and slows e-commerce platforms in upgrading their marketing policies. Product summarization provides users and e-commerce platforms with text that contains the most valuable information about products, which is of practical value in addressing the problem of information overload. In e-commerce scenarios, unfaithful product summaries that are inconsistent with the corresponding product attributes, e.g., generating "cotton" for a "silk dress", mislead users and decrease the public credibility of the e-commerce platform. Thus, faithfulness is a basic requirement for product summarization. Recently, sequence-to-sequence (seq2seq) methods have shown promising performance on general text summarization tasks (Rush et al., 2015;Chopra et al., 2016;Zhou et al., 2017;Li et al., 2020b), and they have been adopted for text generation tasks in the field of e-commerce (Khatri et al., 2018;Daultani et al., 2019;Chen et al., 2019;Li et al., 2020a). Although applicable, they do not attempt to improve the faithfulness of product summarization. In this paper, we aim to produce faithful product summaries from heterogeneous data, i.e., a textual product description and a product attribute table, as shown in Figure 1. First, as the words in product attribute tables are explicit indicators of the product attributes, we propose a dual-copy mechanism that can selectively copy tokens from textual product descriptions and product attribute words into the summaries. Second, we only allow the product attribute words to appear in the summaries through copying from the source. In this way, attribute words not belonging to a certain product cannot be presented in the corresponding summary. Thus, the generated summary will not contain incorrect attributes that contradict the product. Our main contributions are as follows: • We propose an e-commerce product summarizer that can copy tokens both from the textual product description and from the product attribute table. • We design an attribute-word-aware decoder that guarantees the attribute words can be presented in the summaries only through copying from the source. • We construct a Chinese e-commerce product summarization dataset that contains approximately half a million product descriptions paired with summaries, and the experimental results on this dataset demonstrate the effectiveness of our model.
Figure 1: The framework of our model.
Overview We start by defining the e-commerce product summarization task. The input is a textual product description and a product attribute table, and the output is a product summary.
As shown in Figure 1, a product attribute table, a.k.a. product knowledge base (Shang et al., 2019), contains a wealth of attribute information for a product. To explore the guidance effect of the product attribute table in producing faithful attribute words, we propose a dual-copy mechanism that can selectively copy tokens from both the textual product description and the product attribute table. To guarantee that unfaithful attribute words are not presented in the summary, all attribute words are only allowed to appear in the summary through copying. For the example in Figure 1, k_{1,1} = "Display", k_{1,2} = "mode", k_{2,1} = "Motor", k_{2,2} = "type", v_{1,1} = "LED", v_{1,2} = "digital", v_{1,3} = "display", v_{2,1} = "Variable", and v_{2,2} = "frequency". Dual-Copy Mechanism A bidirectional LSTM encoder converts x, k, and v into the hidden sequences h^x, h^k, and h^v. Then, a unidirectional LSTM decoder generates the hidden sequence s, where s_t is the hidden state at timestep t and c^x_t is the source-sequence context vector produced by the attention mechanism (Bahdanau et al., 2015), with α^x_{t,i} denoting the attention weight for the i-th source word at timestep t. Similarly, we obtain the attention over each attribute word, α^v_{t,i,j}, from which we calculate the attribute-level attention and the attribute context vector. Our model is based on the pointer-generator network (PGNet) (Gu et al., 2016;See et al., 2017), which predicts words based on the probability distributions of the generator and the pointer (Vinyals et al., 2015). The generator produces a vocabulary distribution P_gen over a fixed target vocabulary. The dual pointer copies a word w from both the source sequence and the attribute table. The copy distribution from the source sequence is obtained from the attention distribution over the source sequence. We adopt coarse-to-fine attention (Liu et al., 2019) to calculate the final copy distribution over the attribute word sequence under the guidance of attribute-level semantics. The overall copy distribution P_copy is a weighted sum of the source-sequence copy distribution and the attribute-word copy distribution. The final distribution combines P_gen and P_copy, where λ_t ∈ [0, 1] is the generation probability for timestep t. The loss function L is the average negative log-likelihood of the ground-truth target word y_t over all timesteps t. Only-Copy Strategy for Attribute Words To avoid generating summaries inconsistent with the product attributes, we constrain the attribute words to appear in the summary only through copying from the source, so that wrong attribute words cannot be generated. To achieve this, for each attribute word y_att, we set P_gen(y_att) to a negligible constant (1e-9) in Equation 11. We design a heuristic method to collect the attribute words. We find through data analysis that an attribute word in the target tends to be present in the source. Thus, for each source-target pair, we look up each target word in the source and extract the mismatched words as general-word candidates. To guarantee the precision of general-word extraction, we regard Chinese words with character-level intersections as matched words. We regard the target words that are almost never recognized as general-word candidates as the attribute words. The set of attribute words will be released along with our dataset. Dataset We collect the dataset from a mainstream Chinese e-commerce platform. Each sample is a (product textual description, product attribute table, product summary) triplet.
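The only-copy strategy can be illustrated with a short sketch of how the final word distribution is assembled. Since the paper's numbered equations are not reproduced in this text, the snippet assumes a generic pointer-generator combination, P_final = λ_t · P_gen + (1 − λ_t) · P_copy, and simply clamps the generator probability of attribute words to the negligible constant described above before mixing; the tensor names, shapes, and mask construction are illustrative, not the authors' implementation.

```python
import torch

def final_distribution(p_gen_vocab, p_copy_vocab, lambda_t, attr_mask, eps=1e-9):
    """Mix generator and copy distributions with the only-copy constraint.

    p_gen_vocab : (batch, vocab) generator distribution over the target vocabulary
    p_copy_vocab: (batch, vocab) copy distribution already scattered into vocabulary space
                  (source-sequence copy and attribute-word copy merged upstream)
    lambda_t    : (batch, 1)     generation probability at the current timestep
    attr_mask   : (vocab,)       1.0 for attribute words, 0.0 for general words
    """
    # Attribute words may only be produced by copying: clamp their generator mass to eps.
    p_gen_masked = p_gen_vocab * (1.0 - attr_mask) + eps * attr_mask
    return lambda_t * p_gen_masked + (1.0 - lambda_t) * p_copy_vocab

# Toy usage with a 6-word vocabulary in which word ids 4 and 5 are attribute words.
vocab_size = 6
attr_mask = torch.tensor([0., 0., 0., 0., 1., 1.])
p_gen = torch.softmax(torch.randn(2, vocab_size), dim=-1)
p_copy = torch.softmax(torch.randn(2, vocab_size), dim=-1)
lam = torch.full((2, 1), 0.6)
p_final = final_distribution(p_gen, p_copy, lam, attr_mask)
print(p_final[:, 4:])  # attribute-word probability now comes almost entirely from copying
```

An attribute word that never appears in the source (and therefore receives no copy attention) ends up with essentially zero final probability, which is the behaviour the only-copy strategy is meant to enforce.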
The product summaries are generated by thousands of qualified experts, and the auditing groups of the e-commerce platform strictly verify their quality. We collect 361,158 and 92,886 samples for the categories of Home Appliances and Bags, respectively. For the Home Appliances category, we randomly select 10,000 samples as the validation set and test set. For the Bags category, we randomly select 5,000 samples as the validation set and test set. The remaining samples are used as the training set. The average numbers of Chinese characters in the source text and the summary are 325.54 and 79.24, respectively. The average number of product attributes is 13.87. Experimental Results We compare the following baselines. Lead is a simple baseline that takes the first 79 characters of the input as the summary. Seq2seq is a standard attention-based seq2seq model without a copy mechanism. PGNet is the Pointer-Generator Network. C-AttriTable denotes concatenating the source text and the attribute words in the attribute table as the input. AttriTable denotes the dual-copy mechanism. AttriVocab denotes the only-copy strategy for attribute words. Details about the experimental settings can be found in our code. Table 1 shows the results for the different models. Generally, PGNet with attribute tables and the only-copy strategy for attribute words achieves the highest ROUGE score. Although "AttriTable" and "AttriVocab" aim to improve faithfulness, they exhibit acceptable performance in terms of ROUGE score. Considering that ROUGE evaluations are criticized for their poor correlation with human judgment, especially for evaluating the correctness of generated summaries (Novikova et al., 2017;Kahn Jr et al., 2009), we perform a human evaluation of faithfulness and readability. Human Evaluations Three expert annotators are asked to rate the faithfulness as 0 or 1 and the readability on a scale from 1 to 5 (5 is the best) for 100 instances sampled from the test set. From Table 2, we can find that only 64.33% of the summaries are faithful to the source for PGNet, illustrating that faithfulness is an urgent problem for the e-commerce product summarization task. The "AttriTable" and "AttriVocab" strategies solve the unfaithfulness problem to a large extent. For readability, all models obtain comparable results, and we can conclude that our "AttriTable" and "AttriVocab" strategies do not influence fluency. A statistical significance test using a two-tailed paired t-test yields p < 0.01. Conclusion We present an e-commerce product summarization model that aims to improve the consistency between the generated summary and the product attributes. We propose two strategies. First, we introduce a dual-copy mechanism that can selectively copy words from both the textual product descriptions and the product attribute table, which makes our model inclined to produce faithful attribute words existing in the product attribute table. Second, we design a heuristic method to recognize the attribute words, and unfaithful attribute words are not allowed to enter the summaries by generation from the target vocabulary. We construct a large-scale Chinese e-commerce product summarization dataset, and our dataset and code are available 1 . Acknowledgments This work is partially supported by the National Key R&D Program of China (2018YFB2100802).
2,240
2020-12-01T00:00:00.000
[ "Computer Science" ]
The Formation of Educational Programs in the Digital Environment All the components of the university basic educational program, and their formation in one system with the academic discipline chosen as the unit of formation, are presented at the modeling level. A model is proposed that serves as a unified system for the formation of educational programs and their components. It makes it possible to increase the mobility of all university departments that implement educational programs during licensing, accreditation, and other examinations. Introduction In the digital economy it is necessary to change the approach to education management. The educational process should be not only flexible but also mobile, in accordance with modern requirements, during its formation [1,2,3]. Currently, all educational organizations are developing regulations or local normative acts that govern the design process of educational programs [4,5]. The design of educational programs is necessary for solving two groups of problems in universities: 1) maintaining the quality of education at a high level for the training of highly qualified personnel; 2) ensuring accreditation and licensing indicators at the highest quality level. Most of the works [6,7,8] discuss the theoretical and methodological foundations of training highly qualified personnel for the modern Russian economy, related to the formation of the learning process as an electronic open socio-pedagogical system. These works consider the integration of educational structures in the higher education system and identify innovative processes that reflect the features of the formation of higher education programs, including advanced, practice-oriented, and differentiated approaches [9,10]. Some works are devoted to the design of educational programs in general but do not address the methods of their formation in an automated digital environment [11]. However, it is currently impossible to imagine the sustainable development of a modern university without automation of the key areas of a higher educational institution's activity [12]. The introduction of automated information systems for controlling the educational process became widespread in Russia with the introduction of the federal state educational standards of the 3rd generation. Software developers in this area offer their own methods and means of controlling the educational process. A review and selection of the most effective and appropriate educational process control systems for a particular university is presented in [13,14]. The study of the possibilities of using automated systems specifically for the design of educational programs is the main difference between the proposed research and the previous work cited above. The methodological basis of the project consists of pedagogical and scientific principles and approaches: systemic, activity-based, competency-based, and personality-oriented approaches to carrying out educational activities and implementing the educational process, and the principles of continuity, variability, and integration in MEP formation and design. The development of a modern university and the automation of educational activities are inextricably linked with each other in the digital era [15]. The wide variety and the complex, dynamically changing connections of the business processes implemented in higher education institutions determine the functional and structural features of university automation systems [16]. On the one hand, each department solves its own characteristic tasks.
This requires the creation of specialized software products for these departments. On the other hand, it is necessary to implement the interaction of the various subsystems with one another. A logical solution to this problem is the introduction of a unified automated university information system with one database for storing all information, which provides the necessary flexibility and efficient exchange of information between subsystems thanks to a modular architecture. The implementation of such an integrated system makes it possible to significantly increase the effectiveness of university management as a whole. Methods and solutions There are a number of additional difficulties associated with the volume of generated documents when designing educational programs in large universities such as BMSTU. Nowadays BMSTU implements about 500 educational programs, including 10,000 programs of disciplines and practices. It is not possible to obtain high-quality educational programs with such a volume of data without the use of modern digital technologies. Thus, the goal has been set: to develop our own model for the design of educational programs at BMSTU in order to increase the efficiency of educational process management and, as a result, improve the quality of education through the use of modern digital technologies [17]. The developed model for designing educational programs makes it possible to form the components of educational programs and their participants in one system, while ensuring the flexibility of educational trajectories and the mobility of all university departments and improving the quality of training [18]. Controlling the educational program through a unified system for the formation of all its components makes it possible to calculate the effectiveness of programs at all stages of implementation, to identify unprofitable elements at early stages with the possibility of subsequently updating them, and to conduct multi-stage expert quality control of the prepared documents [19,20]. BMSTU is currently implementing 478 educational programs at all levels of higher education (Fig. 1). The educational program is a voluminous multi-page document (Fig. 3), and the requirements for MEP implementation, which are prescribed in the educational standards, determine the number of documents required for licensing and accreditation checks (Fig. 4). Results and discussion The first stage of modeling begins with filling out the "Library of Disciplines", which is created for each department. The flexibility of the system allows one to enter parameters and control them in accordance with the standards approved by orders on the organization of students' extracurricular independent work and the point-rating system, as well as to refine the quantitative data of disciplines and practices in order to improve the quality of training. The interface of each individual discipline or practice program includes data on its methodological support, forming a list of basic literature through the university's electronic library system. In addition, the programs of disciplines and practices can be formed as documents in the "Library of Disciplines" subsystem and, before being approved and made available to students, undergo multi-stage control: by formal parameters (the presence of all necessary sections) and by content (the conclusion of an expert specialist). All of this is carried out in electronic form and is controlled in the "Electronic University" system. Thus, the quality of the generated documents is ensured.
The subsystem "Curriculum" allows one to see the libraries of disciplines of all departments, to form curricula on their basis, and to obtain annual curricula from them automatically. The subsystem "Standards" is the controlling element in the preparation of curricula in accordance with the requirements of the educational standards. This subsystem is multilevel and includes data from the federal state educational standards (FSES) of all generations, including self-established educational standards (SEES). The system for calculating the training load of lecturers and forming the staffing schedule has been rethought. The subsystem "Calculation of the training load" automatically loads data from the curriculum, the load standards, the student contingent, and the additional curriculum module for course formation. One can vary the number of positions by changing the parameters of the discipline, the load standards, and the volume of course training. The department's training load consists of budgetary and extra-budgetary components, as well as the load associated with the training of foreign students. Additional modules of this subsystem allow one to create the department's staffing schedule and the individual plans of lecturers. Since a unified identification system has been introduced in the library of disciplines, the preparation of class schedules in the subsystem "Class schedule of the educational process" can be improved and optimized in many ways. Thus, starting from a specific discipline, the basic educational programs, including curricula, work programs of disciplines, and annual curricula, are formed, and the staffing of the educational programs and the schedule of training sessions are determined from it. All these components are formed in a digital environment using the developed automated system. Conclusion An original model for designing the university's basic educational programs on the basis of a library of disciplines with multi-level quality control has been proposed. The implementation of this design model for the individual components of the educational process using the "Electronic University" information system has been presented. As a result of the work, a powerful tool has been obtained that is adapted to the specifics of designing educational processes, including:
• automation of key educational processes, as a unique and innovative solution that makes it possible to raise the quality of educational process control;
• integration of all the data of the educational environment in one place, contributing to the creation of a universal platform for communication among all its participants;
• prompt access to data in various forms of presentation;
• increased transparency of processes;
• accelerated managerial decision making;
• online reporting for accreditation and licensing checks;
• implementation of an integrated approach to controlling the organization of the educational process, the formation of competencies, and the construction of individual trajectories by students, thanks to the multifunctionality and mobility of the system.
2,045.4
2020-01-01T00:00:00.000
[ "Computer Science", "Education" ]
Role of Shear Stress on Renal Proximal Tubular Cells for Nephrotoxicity Assays Drug-induced nephrotoxicity causes huge morbidity and mortality at massive financial cost. The greatest burden of drug-induced acute kidney injury falls on the proximal tubular cells. To maintain their structure and function, renal proximal tubular cells need the shear stress from tubular fluid flow. Diverse techniques to reintroduce shear stress have been studied in a variety of proximal tubular-like cell culture models. These studies often have limited replicates because of the huge cost of equipment and do not report all relevant parameters to allow reproduction and comparison of studies between labs. This review codifies the techniques used to reintroduce shear stress, the cell lines utilized, and the biological outcomes reported. Further, we propose a set of interventions to enhance future cell biology understanding of nephrotoxicity using cell culture models. The inability to accurately identify nephrotoxicity is a major issue for drug development. Nephrotoxicity is the commonest reason to prolong hospital stays in the United States and elsewhere [1]. Acute kidney injury commonly progresses to end-stage renal disease and the need for renal replacement therapy, e.g., dialysis or transplantation, with its substantial costs and morbidity [2][3][4]. Fourteen drugs were withdrawn from the market between 1990 and 2010 for nephrotoxicity that had not been detected with available screening strategies [5]. There is currently no FDA-approved in vitro test for nephrotoxicity [6]. A major contributing factor is the lack of a readily available cellular target that is an accurate, representative, and physiologically relevant model of cells in the living kidney. Proximal tubule cells (PTC) are a prime candidate for an in vitro assay of nephrotoxicity. PTC are primarily responsible for the uptake and metabolism of drugs in the kidney [7][8][9] and are a target for damage from many commonly prescribed clinical drugs including aminoglycoside antibiotics, amphotericin B, radiocontrast media, immunoglobulins, and diverse antineoplastic agents [7,8]. PTC rapidly dedifferentiate under traditional 2D culture conditions, e.g., 96-well tissue culture plates, which severely limits the utility of this format for in vitro toxicity assays [10]. What PTC need is exposure to fluid shear stress. In vivo, PTC are exposed to fluid shear stress as the blood filtrate from the glomerulus flows past them en route to becoming urine. They sense this shear stress and respond with structural and biochemical changes, changes that need to be maintained in the in vitro environment. Fukuda et al. found that human primary PTC exposed to fluid shear stress for 24 hours increased their expression of several drug transporters, including SLC37A2, SLS33A2, and SLC47A1 (also known as MATE2-K) [11]. Xu et al. found that primary rat tubules maintained their expression of P450 CYP1A1 for 12 days if exposed to fluid shear stress in a gyrorotatory culture [12]. Mollet et al. found that HK-2 cells exposed to fluid shear stress in a bioreactor, compared to static cultures, maintained their expression of multiple membrane transporter proteins for 21 days, including PEPT1 (SLC15A1), PEPT2 (SLC15A2), OCT1 (SLC22A1), OAT3 (SLC22A8), gamma-glutamyl transferase (gGT), and sodium-glucose cotransporter-2 (SGLT2) [13]. Unfortunately, none of these authors reported the intensity of the fluid shear stress that was applied to the PTC.
The magnitude of the shear stress to which PTC are exposed depends on the quantity of filtrate flowing past, the viscosity of the filtrate, and the internal structure of the proximal tubule. The proximal tubule narrows as one moves distally, the length and density of the microvilli change, and the composition of the fluid changes as the PTC reabsorb water, proteins, and other components. However, the flow in the initial portion of the proximal tubule can be estimated from the single-nephron glomerular filtration rate of 30 to 90 nL/min [14]. This would suggest that PTC in vivo are exposed to approximately 0.05-0.17 dynes/cm² of shear stress [15], which is much lower than the 5 to 100 dynes/cm² that endothelial cells encounter in the vascular system [16]. These low levels of shear stress are challenging to reproduce in vitro in a uniform manner, particularly when they must be implemented in high-throughput applications. In vitro studies with PTC have used a wide range of shear stresses applied for varying durations, making it impossible to compare results between laboratories. What is needed is a standard uniform method of applying shear stress in vitro that is simple and easy to implement. Two issues have limited studies on shear stress. First, the equipment is usually high-priced, which creates significant capital barriers to experimentation [17][18][19]. Second, few techniques to reintroduce shear have thoroughly defined the parameters needed for reproduction by other labs [20]. This review seeks to categorize the known literature on the reintroduction of shear stress on renal proximal tubule cells and the utility of suspension culture models which reintroduce shear to model renal damage. The current aim is to understand the amount of shear induced by different cell culture methods, the cell types utilized, and the outcomes assayed. These insights allow us to recommend interventions in the field of drug-induced nephrotoxicity to move the field forward. We performed PubMed searches using the terms renal proximal tubular cells, suspension culture, bioreactor, proximal tubule, renal cell shear stress, nephrotoxicity, drug toxicity, acute tubular necrosis, and renal genomics. We identified 25 papers that used PTC (or PTC cell lines) and specified the intensity of the shear stress flow and the duration of the stimulus. Figure 1 compares the intensity and duration of shear stress applied to PTC in these 25 publications. The marker shape indicates the method utilized to generate the shear stress, most of which are microfluidics and parallel-plate studies. Each study is referenced with a number, defined in Table 1. This graph gives a stark account of why the field has not come to a universal model for studies of nephrotoxicity: the miscellany of cell types, shear levels applied, and durations of exposure defies simple interpretation. Fluid shear stress induces structural changes in PTC (Table 1). Reorganization of actin fibers and the cytoskeleton was frequently observed, particularly in experiments using higher intensities of fluid shear stress. The studies that applied higher intensities of shear stress are useful, as elevated levels of shear stress on PTC have been implicated in the progression of renal disease [41][42][43][44][45]. Increased expression of microvilli in the presence of fluid shear stress was also noted by multiple authors (Table 1). This is critical, as the microvilli are the sensors for tubular flow and shear stress [23,29,31,46,47].
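The estimate quoted above (0.05-0.17 dynes/cm² from a single-nephron filtration rate of 30-90 nL/min) can be checked with the wall shear stress formula for laminar Poiseuille flow in a cylindrical tube, τ = 4μQ/(πr³). The sketch below does this calculation; the tubule radii and filtrate viscosity are assumptions chosen for illustration, and with these assumptions the result lands in the same order of magnitude as the quoted range (the exact figures depend strongly on the radius assumed), so this is an order-of-magnitude check rather than a reproduction of reference [15].

```python
# Order-of-magnitude estimate of proximal tubular wall shear stress,
# assuming laminar Poiseuille flow: tau = 4*mu*Q / (pi*r^3).
# The radius and viscosity values are assumptions for illustration.
import math

MU = 7.2e-3  # filtrate viscosity, dyn*s/cm^2 (~water at 37 C; assumed)

def wall_shear_stress(q_nl_per_min: float, radius_um: float, mu: float = MU) -> float:
    """Return wall shear stress in dynes/cm^2 for a cylindrical tubule."""
    q = q_nl_per_min * 1e-6 / 60.0     # nL/min -> cm^3/s
    r = radius_um * 1e-4               # um -> cm
    return 4.0 * mu * q / (math.pi * r ** 3)

if __name__ == "__main__":
    for radius in (30.0, 40.0):        # assumed luminal radii, um
        for flow in (30.0, 90.0):      # single-nephron GFR range, nL/min
            tau = wall_shear_stress(flow, radius)
            print(f"r = {radius:4.0f} um, Q = {flow:4.0f} nL/min -> "
                  f"tau ~ {tau:.2f} dynes/cm^2")
```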
Of critical importance to the development of an in vitro nephrotoxicity assay, fluid shear stress also increases the quantity and/or activity of transporters that take up proteins and drugs (Table 1). Exposure to fluid shear stress causes PTC to express more megalin and cubilin, transporters that are a central part of the proximal tubular uptake of albumin, many other proteins, and drugs [48][49][50]. Indeed, albumin transport increases when PTC are exposed to fluid shear stress (Table 1). Renal cells employ a variety of organic anion transporters (OATs) and organic cation transporters (OCTs) [51] in the uptake and secretion of drugs. Fluid shear stress has also been observed to upregulate many of these on PTC, including MATE (SLC47A1), OCT2 (SLC22A2), P-gp (ABCB1 or MDR1), MATE2-K (SLC47A2), and MRP2/4 (ABCC2/4). Jang et al. noted that the P-gp efflux by human primary PTC exposed to 0.2 dynes/cm² shear stress in vitro was closer to that observed in vivo compared to PTC in static 2D cultures [27]. Figure 1: Varying conditions used to expose PTC to shear stress in vitro. The graph illustrates the varying intensity and duration of shear stress applied to cultured PTC from 25 reports in the literature. The y-axis is the intensity of shear force in dynes/cm², and the x-axis is the duration of the exposure in hours. When a report used multiple conditions, arrows indicate the range of intensities and/or times and the marker is placed at the average value. Each publication is indicated with a number which corresponds to the citation in Table 1. The marker shape indicates the method used to apply the fluid shear stress: blue squares are parallel plates and microfluidics, red circles are rotating-wall suspension culture, and green triangles are stirring bioreactors and orbital shakers. With a specific focus on suspension culture and shear stress effects on renal proximal tubular cells, this review expands and enhances a specific segment of the Good Cell Culture Practice (GCCP) initiative [52] started by the former European Center for the Validation of Alternative Methods (ECVAM) [53]. The GCCP program tries to define standardized protocols to cultivate all relevant human tissues/organs to test the toxicity of newly developed drugs and chemicals. The diversity of shear stress levels and durations in the studies reviewed here emphasizes the need for systematic reporting of specific criteria in order to produce a knowledge base to support harmonized protocols. Our lab proposed this more than a decade ago, embodied in the Bonn criteria [20]. While there are developments on the way to generate harmonized protocols that should allow for prediction of nephrotoxicity during the preclinical phase of drug development [54][55][56], each methodology will need the kind of summary review presented here to allow useful progression of the initiatives. The duration of shear exposure and the cell type have striking effects on cellular responses (Table 1). It is a conundrum to compare studies not only because of differing shear stress, duration, and cell types, but because the various studies utilized diverse outcome measures. A few studies have examined different shear levels and demonstrated changes dependent on shear levels [21,29,31]. There is scant, if any, data on the time course of changes in selected outcomes. Hence, study of changes in outcomes over time is one of our suggestions for future study.
Only with harmonized protocols that define the shear stress applied to renal proximal tubules can many of the questions pivotal to predicting nephrotoxicity be answered: does shear induce certain specific patterns in gene expression? Is there an interdependency between the magnitude of shear stress and the expression of specific genes? How do the effects on cells exposed to shear stress or under microfluidic conditions compare? Can the change in phenotypic function of proximal tubular cells in culture towards an in vivo equivalent state be achieved by shear stress alone? Conclusions We propose three strategies to move the field towards a uniform model to test the nephrotoxicity of drugs. First, in 2010, we proposed a minimum data set to be reported to allow reproduction of suspension culture studies in other labs [20]. As the meeting where the proposal was presented was held in Bonn, we termed these the Bonn criteria [20]. They include the vessel diameter, rotation speed, media viscosity, media density, cell/organoid/spheroid diameter, and density. The Bonn criteria remain critical to interpret data between labs and to allow accurate experimental reproduction between labs. Second, if different labs continue to use different techniques and reagents, including cell types, some trade-off or bake-off studies will be indispensable to understand differences between approaches. Last, the development of a low-capital, inexpensive-to-use suspension culture technology would allow far more labs access to the technology and create the opportunity for studies to include more replicates and conditions cost-effectively. The search for a uniform model to study nephrotoxicity is severely limited by the use of a multiplicity of methods and techniques, which cannot be simply compared. Laboratories use different cell types approximating renal proximal tubular cells and apply diverse shear stress methods, and there is no systematic and adequate reporting of culture parameters. The morbidity, mortality, and cost of drug-induced acute renal injury should make an integrated cell biology approach to nephrotoxicity an urgent priority. OAT: organic anion transporters; OCT: organic cation transporters. * Initially reported as 1.0 dynes/cm²; corrected in errata to 0.1 dynes/cm². This table displays the effects of fluid shear stress on PTC in vitro. The data are abstracted from 25 publications that reported the amount of shear stress applied (dynes/cm²) and the duration of the stimulus. When a range of intensities or exposure times was used, arrows indicate the range and the symbol is placed at the average amount. The PTC cell types used included primary cells (from human, mouse, or rat), the Madin-Darby canine kidney (MDCK) cell line, human papilloma-transduced PTC (HK-2), SV40-transformed murine PTC, LLC-PK1 (pig kidney line), and OK (opossum kidney cell line). The reference numbers are the key for Figure 1. Data Availability All articles cited are freely available on PubMed and other academic media. Disclosure The content does not represent the views of the Department of Veterans Affairs or the United States of America. Conflicts of Interest The authors declare that they have no conflicts of interest.
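Referring back to the Bonn criteria listed in the Conclusions above (vessel diameter, rotation speed, media viscosity, media density, cell/organoid/spheroid diameter and density), one way to make the reporting recommendation concrete is a machine-readable record that every study could publish alongside its results. The sketch below shows one possible such record; the schema, field names, units, and example values are illustrative assumptions, not a published standard.

```python
# A possible machine-readable record for the Bonn criteria; the schema,
# field names and example values are illustrative assumptions only.
from dataclasses import dataclass, asdict
import json

@dataclass
class BonnCriteria:
    vessel_diameter_mm: float
    rotation_speed_rpm: float
    media_viscosity_mPa_s: float
    media_density_g_per_ml: float
    particle_diameter_um: float      # cell, organoid or spheroid diameter
    particle_density_g_per_ml: float

@dataclass
class ShearStressReport:
    cell_type: str                   # e.g. "HK-2", "primary human PTC"
    culture_method: str              # e.g. "rotating wall vessel", "microfluidic chip"
    shear_stress_dyn_cm2: float
    exposure_hours: float
    bonn: BonnCriteria

if __name__ == "__main__":
    report = ShearStressReport(
        cell_type="HK-2",
        culture_method="rotating wall vessel",
        shear_stress_dyn_cm2=0.1,
        exposure_hours=72.0,
        bonn=BonnCriteria(
            vessel_diameter_mm=50.0,
            rotation_speed_rpm=15.0,
            media_viscosity_mPa_s=0.78,
            media_density_g_per_ml=1.01,
            particle_diameter_um=200.0,
            particle_density_g_per_ml=1.04,
        ),
    )
    # Serializing the record makes inter-laboratory comparison straightforward.
    print(json.dumps(asdict(report), indent=2))
```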
2,977.8
2021-04-21T00:00:00.000
[ "Biology" ]
Scalable and Accurate ECG Simulation for Reaction-Diffusion Models of the Human Heart Realistic electrocardiogram (ECG) simulation with numerical models is important for research linking cellular and molecular physiology to clinically observable signals, and crucial for patient tailoring of numerical heart models. However, ECG simulation with a realistic torso model is computationally much harder than simulation of cardiac activity itself, so that many studies with sophisticated heart models have resorted to crude approximations of the ECG. This paper shows how the classical concept of electrocardiographic lead fields can be used for an ECG simulation method that matches the realism of modern heart models. The accuracy and resource requirements were compared to those of a full-torso solution for the potential and scaling was tested up to 14,336 cores with a heart model consisting of 11 million nodes. Reference ECGs were computed on a 3.3 billion-node heart-torso mesh at 0.2 mm resolution. The results show that the lead-field method is more efficient than a full-torso solution when the number of simulated samples is larger than the number of computed ECG leads. While the initial computation of the lead fields remains a hard and poorly scalable problem, the ECG computation itself scales almost perfectly and, even for several hundreds of ECG leads, takes much less time than the underlying simulation of cardiac activity. INTRODUCTION The electrocardiogram (ECG) is one of the most common tools in present-day medicine, yet its relation with the molecular biology of the heart is still poorly understood. The ECG witnesses the collective activity of about a million current-generating transmembrane proteins in each of the heart's muscle cells (Hille, 2001). Many of these proteins have been identified and their actions have been captured in mathematical models that predict their collective behavior on the scale of a cell (Noble and Rudy, 2001). By coupling millions of these membrane models one can create a model of whole-heart electrophysiology. Such models generate crucial insights in the functional effects of molecular-level changes, allowing for example to predict dangerous side effects of new drug designs (Passini et al., 2017) or to understand how cardiac ion-channel mutations influence cardiac rhythm disorders (Gima and Rudy, 2002). Moreover, from their results one can compute the corresponding ECG and predict how lab results on subcellular components would translate to everyday practice (Hoogendijk et al., 2010;Keller et al., 2012;Zemzemi et al., 2013). Such realistic models are large and, when run on a single processor, would take days to simulate just one heartbeat. Fortunately the problem can be expressed in such a way that the work may be spread over many processors with little communication between them. Therefore, these computations are said to scale very well, meaning that they run almost twice as fast every time the number of processors is doubled (Vázquez et al., 2011). This makes them suitable for use on large-scale parallel computers, allowing models to run in nearly real time (Niederer et al., 2011b;Richards et al., 2013). Simulation of a realistic ECG from the results of such a numerical heart model is much harder, because the electrical current generated by the heart meets a different conductivity at each point in the torso. As a result, each point influences the potential everywhere else, so to find the potential anywhere one must solve it everywhere at the same time. 
Numerically this means that a large system of linear equations must be solved, one for each point in the torso model. These problems are harder when they are larger and require frequent communication between the processors in a parallel computer. This means that they cannot be solved much faster by using more processors. Therefore, ECG computation is becoming a bottleneck, limiting both the speed and the spatial resolution of our models. To avoid this problem many researchers have used simplified torso models, resulting in a less accurate ECG. A solution that can avoid such a sacrifice is to simulate the ECG using an electrocardiographic concept named a lead field. This allows the problem to be split into a hard (poorly scaling) part and an easy (well scaling) part. The hard part is solved only once for each ECG lead, while the easy part is run repeatedly for each time step in a simulation and for multiple simulations on the same geometry. This approach has been used by several authors, but generally with simplified heart models (Pezzuto et al., 2017) or, again, with simplified torso models (Horacek, 1973;Miller and Geselowitz, 1978;Mailloux and Gulrajani, 1982;Aoki et al., 1987). The purpose of this paper is to show that a lead-field approach can greatly improve scalability in a high-performance computing (HPC) context without sacrificing accuracy. This is not obvious, because the method requires a large set of transfer coefficients (the lead field) to be stored between the two phases of the computation. The efficiency of the method depends on the accuracy with which the lead field must be computed and the degree to which it can be downsampled without affecting the accuracy of the ECG too much. Finally, to provide answers to these questions an accurate reference solution is needed. Using a reference solution computed on a full torso model at 0.2 mm resolution this study shows that the lead field can indeed be downsampled enough to achieve an efficient and scalable computation, providing roughly two orders of magnitude speedup with negligible loss in accuracy. The results of this study make it possible to build more realistic heart models with higher spatial resolution, without spending much more time to compute the ECG. Model Equations The methods in this study are based on the bidomain model of cardiac electrophysiology (Miller and Geselowitz, 1978;Tung, 1978), on which most of the current modeling work in this area is based (Niederer et al., 2011a;Henriquez, 2014). The bidomain model is a continuum approximation of the heart muscle, which in reality consists of a network of interconnected muscle cells embedded in an extracellular matrix and other structures such as fibroblasts and capillaries. The bidomain model approximates this as two co-located spaces: the intracellular domain, consisting of the interior of the cells and the gap junctions that connect them, and the extracellular domain, consisting of everything else. The two domains are characterized by conductivity tensors G i and G e , respectively. Their values at each point in the model depend on the fiber direction and account for the partial volume occupation of the two domains. In addition the parameters C m and β determine the capacitance of the cell membrane and the amount of membrane per unit volume, respectively. The state variables of the model are the potential fields φ i in the intracellular and φ e in the extracellular domain, and a set of variables y describing the state of the membrane model at each location. 
Using the auxiliary variable V m = φ i − φ e and agreeing that all variables are functions of time and position we can express the bidomain model compactly as where the term C m ∂ t V m represents the capacitive transmembrane current, the function I ion the density of ionic current flowing between the two domains, and F is a nonlinear vector-valued function describing how the membrane state evolves. The pair of functions I ion and F constitutes the membrane model. Suitable boundary conditions are on the boundary A of the cardiac muscle and G e ∇φ e · ∂ T = 0 (5) on the torso boundary T (Tung, 1978;Krassowska and Neu, 1994). The electrical activity of the heart can then be simulated by integrating Equations (1), (2), and (3) under the boundary conditions (4) and (5) (Vigmond et al., 2002). This is known as a bidomain reaction-diffusion model. In this study a simplified version, a "monodomain" reaction-diffusion model, was used. This model can be derived by assuming that G i and G e are proportional (Leon and Horácek, 1991). Although this is a gross simplification the effect of this assumption is negligible for most purposes if the model parameters are well chosen (Potse et al., 2006;Nielsen et al., 2007;Bishop and Plank, 2011;Coudière et al., 2014). The monodomain model reads The "monodomain conductivity tensor" G m was computed as the series conductivity of the two domains, With this choice the resistance encountered by a current loop through the cell membrane is the same as in a bidomain model, so that also the conduction velocity of a propagating activation wavefront is almost the same. An ECG potential V(t) at time t is the difference in φ e between two locations on the body surface or, more generally, a linear combination where c i are the relative contributions of the two or more electrodes and φ i e are the potentials at the corresponding positions. The coefficients c i must fulfill charge conservation, To compute φ e we must return to the bidomain model. Equations (1) and (2) can be combined and reorganized to yield This equation can be solved for φ e in the whole torso at once from a given distribution of V m . However, for the ECG we need to know φ e at a few locations only. Therefore, it can be more efficient to use a Green's function of the operator ∇ · ((G i + G e )∇.) for each of these locations. Since an ECG lead is a linear combination of φ e at two or more points it can also be represented directly by a linear combination of Green's functions. In electrocardiology such linear combinations of Green's functions are named lead fields (McFee and Johnston, 1953;Geselowitz, 1989;Colli-Franzone et al., 2000). A lead field is computed once for each ECG lead. It is then used to evaluate the ECG at each time step of the reaction-diffusion model and, as long as the conductivity parameters are not changed, can be re-used for multiple simulations. In terms of a lead field Z( x) the ECG potential V(t) at time t is where the integration is over the myocardium. In contrast to the solution of the full system (8) this calculation is simple and a priori highly scalable. The lead field can be computed as the potential field resulting from a unit current applied at the electrode locations x i (Geselowitz, 1989): where the coefficients c i are as in Equation (7) and δ is Dirac's delta function. To avoid a scaling factor in (9) the total injected current must be unitary, |c i | = 2. 
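A discretized version of the lead-field integral (Equation 9) is simple: on each myocardial element the intracellular current density G_i ∇V_m is contracted with the lead vector field ∇Z and the element contributions are summed. The sketch below shows this per-time-step evaluation in NumPy; array names, shapes, the example values, and the sign convention are assumptions chosen for illustration, not the data layout of Propag-5 or any other simulation code.

```python
# Minimal sketch of the per-time-step lead-field ECG evaluation:
# V(t) ~ sum over myocardial elements of  grad(Z) . (G_i grad(V_m)) * volume.
# Array names, shapes and the sign convention are illustrative assumptions.
import numpy as np

def lead_field_ecg(grad_Z, G_i, grad_Vm, elem_volume):
    """
    grad_Z      : (n_leads, n_elem, 3)  lead vector field per element
    G_i         : (n_elem, 3, 3)        intracellular conductivity tensors
    grad_Vm     : (n_elem, 3)           gradient of V_m on each element
    elem_volume : (n_elem,)             element volumes
    returns     : (n_leads,)            ECG potentials at this time step
    """
    # Intracellular current density on each element: J_i = G_i grad(V_m)
    J_i = np.einsum("eij,ej->ei", G_i, grad_Vm)
    # Contract with the lead fields and integrate over the myocardium.
    return np.einsum("lei,ei,e->l", grad_Z, J_i, elem_volume)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_leads, n_elem = 12, 1000
    ecg = lead_field_ecg(
        rng.normal(size=(n_leads, n_elem, 3)),
        np.broadcast_to(np.eye(3) * 3.0e-3, (n_elem, 3, 3)),
        rng.normal(size=(n_elem, 3)),
        np.full(n_elem, 0.02 ** 3),     # 0.2 mm elements, volume in cm^3
    )
    print(ecg.shape)  # one value per lead at this time step
```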
Model Geometry In order to run tests on a relevant geometry a model of the heart and torso was used that had been created for a previous study (Kania et al., 2017). The methods to build this geometry, only tersely described before, were as follows. High-resolution cardiac and thoracic computed tomography (CT) images were obtained from a female patient in her thirties. Images were segmented automatically using the MUSIC software (IHU Liryc, Université de Bordeaux and Inria Sophia Antipolis, France), under supervision of an expert operator. The boundaries of the segmented volumes were expressed as triangulated surfaces and meshing errors were manually corrected using Blender (The Blender Foundation, Amsterdam, The Netherlands). The resulting surface mesh defined the volumes of the ventricular myocardium, left and right cavities with parts of the great vessels, lungs, and the whole body. To define hexahedral meshes for the computations the surfaces were overlaid with a 3D cartesian mesh whose elements were assigned types according to the surfaces in which they were contained. The bones were also segmented and meshed but not included in the simulations. The atrial myocardium was not segmented. The heart mesh was processed to define subendocardial and subepicardial layers and fiber directions using the rule proposed by Beyar and Sideman (1984), as previously described (Potse et al., 2006). The torso mesh was similarly processed to define a layer of 1 cm thickness directly under the skin as skeletal muscle and to define a sheet direction in this layer. Since the true fiber directions of the skeletal muscle layer are too complex to account for the model muscle simply had a low conductivity in the radial direction and a high conductivity in all circumferential directions ( Table 1). During the thoracic scan the patient was wearing a vest with 252 embedded electrodes (Tilt et al., 2013;Cochet et al., 2014). The locations of these electrodes were extracted from the CT data using software provided by the manufacturer of the vest. In addition the locations of the 9 standard ECG electrodes were determined by referring to the bone mesh, and two electrode locations on the hips were chosen. The surface mesh with electrode positions is illustrated in Figure 1. Spatial Discretization Spatial discretization was done using a finite-difference method. Differential operators of the form ∇ · (G∇.), where G is any of the conductivity tensor fields employed, were computed using an TABLE 1 | Tissues used in the simulations together with the volumes they occupy in the torso model, the conductivity parameters σ (in mS/cm), and β (cm −1 ); the subscript "i" stands for intracellular, "e" for extracellular, "L" for longitudinal, "T" for transverse (within a tissue sheet), and "C" for across-sheet. Material Volume expression proposed by Saleheen and Ng (1997). This expression assumes that G is constant on elements and that potentials are defined on the nodes of the mesh. It produces a 19-point stencil that takes anisotropy and inhomogeneities into account. The simulation code read its geometry in terms of elements, and created a node mesh, assigning node types such that all corners of a myocardial element would have myocardial nodes. In order to treat myocardial boundaries correctly, the β value of each node was the average of those associated with the 8 elements around it, which was zero for non-myocardium (Potse et al., 2006). 
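The boundary treatment just described, in which each node receives the average β of the 8 elements that touch it and non-myocardial elements contribute zero, is compact to express on a regular hexahedral grid. The sketch below is a simplified illustration under that assumption; it ignores the node typing and domain decomposition of the actual simulation code.

```python
# Node-wise averaging of the membrane surface-to-volume ratio beta over the
# 8 elements surrounding each node of a regular hexahedral mesh; non-myocardial
# elements contribute beta = 0, which softens the myocardial boundary.
# This is a simplified illustration, not the bookkeeping of the actual solver.
import numpy as np

def node_beta(elem_beta: np.ndarray) -> np.ndarray:
    """elem_beta: (nx, ny, nz) per-element beta (0 outside the myocardium).
    Returns per-node beta of shape (nx+1, ny+1, nz+1)."""
    padded = np.pad(elem_beta, 1, mode="constant", constant_values=0.0)
    out = np.zeros(tuple(n + 1 for n in elem_beta.shape))
    # Each node sees the 8 elements at offsets {0,1}^3 in the padded array.
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                out += padded[dx:dx + out.shape[0],
                              dy:dy + out.shape[1],
                              dz:dz + out.shape[2]]
    return out / 8.0

if __name__ == "__main__":
    beta = np.zeros((4, 4, 4))
    beta[1:3, 1:3, 1:3] = 800.0          # a small myocardial block, cm^-1
    print(node_beta(beta)[2, 2, 2])      # interior node: 800.0
    print(node_beta(beta)[1, 1, 1])      # boundary node: 100.0 (1 of 8 elements)
```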
Simulation of Cardiac Activity To prepare input data for ECG simulation propagating activation was simulated using the monodomain reaction-diffusion model (6) using the membrane model of Ten Tusscher and Panfilov (2006) for the functions F and I ion . A uniform time step of 10 µs was used. At each time step the code 1. evaluated the diffusion current β −1 ∇ · G m ∇V m ), 2. communicated the diffusion current across domain boundaries, 3. integrated the membrane status variables y, 4. evaluated I ion (V m , y), and 5. integrated V m . After each 100 time steps results were written to file. Simulations were run on a heart mesh at 0.2 mm resolution. Tissue parameters determining G m and β are listed in Table 1. Gating variables were integrated with the method of Rush and Larsen (1978) and all other variables with a forward Euler method. Activation was started with a single stimulus at one location, at the beginning of the simulation. Seven simulations were run, each time with the stimulus at a different location. Simulations covered 500 ms to include the full depolarization and repolarization of the ventricles. ECG Simulation The ECG was computed with several methods: FSF, the fine-mesh full solution solved the full system (8) for given V m on a heart-torso mesh with 0.2 mm resolution. This was an exceptionally large computation requiring 3.3 · 10 9 mesh nodes and 12 TB memory. It was combined in a single run with the integration of the monodomain reaction-diffusion model (6). Solutions for φ e were computed after each 100 time steps. FSC, the coarse-mesh full solution solved an alternate form of Equation (8) on a heart-torso mesh with 1 mm resolution (Potse and Kuijpers, 2010). In this case the equation read where I w is a projection of the term ∇ · (G i ∇V m ) from a 0.2 mm resolution heart mesh onto a 1 mm resolution torso mesh. Each coarse-mesh node received contributions from a cube-shaped area including all fine-mesh nodes within the up to 8 coarse-mesh elements around it, with higher weights attributed to nearby nodes, as in a trilinear interpolation: Let x, y, z be the number of fine-mesh edges between a coarse-mesh node and a fine-mesh node along the x, y, and z axis, respectively. Then the contribution of the fine-mesh node to the coarse-mesh node was Frontiers in Physiology | www.frontiersin.org The coarse mesh was constructed such that a myocardial fine-mesh node was always surrounded by 8 coarse-mesh nodes. Therefore, w added up to unity for each fine-mesh node and charge conservation was ensured. For the FSC method the monodomain reaction-diffusion model (6) was integrated in a separate run which saved I w to file. This method has been used routinely in several studies (Nguyên et al., 2015;Meijborg et al., 2016;Duchateau et al., 2017;Kania et al., 2017). The torso mesh in this case consisted of 2.7 · 10 7 nodes. LF, the lead-field method evaluated the integral expression (9) in its discretized form. This took place during the reactiondiffusion simulation and on the same mesh, i.e., at 0.2 mm resolution, after each 100 time steps. Each component of ∇V m was evaluated on model elements as an average of the differentials along 4 edges of the element. The conductivity tensor G i was also evaluated on each element. For testing purposes the lead vector field ∇Z was evaluated at different resolutions. For this purpose the field was first downsampled by an external program, using a simple averaging of n × n × n elements, where n could be 2, 5, 10, or 25. 
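Returning to the five per-step operations listed above under "Simulation of Cardiac Activity", they map onto a simple explicit update of the monodomain model (Equation 6). The sketch below shows one forward-Euler step on a single process; it uses a trivial placeholder in place of the Ten Tusscher-Panfilov membrane model and omits the Rush-Larsen gating update and the MPI halo exchange, so it should be read as an illustration of the step structure rather than the Propag-5 implementation.

```python
# One explicit (forward Euler) time step of the monodomain model, following
# the per-step operations listed above.  A toy membrane model stands in for
# Ten Tusscher-Panfilov; Rush-Larsen gating and halo exchange are omitted.
import numpy as np

CM = 1.0      # membrane capacitance, uF/cm^2
DT = 0.01     # time step, ms (10 us)

def membrane_model(Vm, y):
    """Placeholder for I_ion(Vm, y) and dy/dt = F(Vm, y)."""
    i_ion = 0.1 * (Vm + 85.0) * y          # toy repolarizing current
    dydt = 0.01 * (1.0 - y)                # toy gating dynamics
    return i_ion, dydt

def monodomain_step(Vm, y, diffusion):
    """Advance Vm and the membrane state y by one time step.
    `diffusion(Vm)` must return beta^-1 * div(G_m grad Vm) per node."""
    i_diff = diffusion(Vm)                 # step 1 (step 2, halo exchange, omitted)
    i_ion, dydt = membrane_model(Vm, y)    # steps 3-4
    y_new = y + DT * dydt                  # forward Euler for the membrane state
    Vm_new = Vm + DT * (i_diff - i_ion) / CM   # step 5
    return Vm_new, y_new

if __name__ == "__main__":
    n = 100
    Vm = np.full(n, -85.0); Vm[:5] = 20.0       # a crude stimulus
    y = np.full(n, 0.5)
    laplacian = lambda v: 0.1 * (np.roll(v, 1) + np.roll(v, -1) - 2 * v)
    for _ in range(100):                         # 1 ms of toy activity
        Vm, y = monodomain_step(Vm, y, laplacian)
    print(Vm.min(), Vm.max())
```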
LFS, the lead-field method with selective downsampling was identical to the lead-field method except that the downsampling program took the tissue types of the elements into account. If any of the fine-mesh elements inside a coarse-mesh element E had a myocardial type, only fine-mesh elements with myocardial type were used in the average for E. The idea behind this was that ∇Z undergoes abrupt changes at the myocardial boundaries, and that it is more accurate to mix in a contribution from another myocardial area than, for example, one from the lung. The notations LF(C, S) and LFS(C, S) will be used for the LF and LFS methods, respectively, with lead fields computed at a resolution of C millimeters and downsampled to a resolution of S millimeters. Computation of Lead Fields To prepare the lead fields Z for the ECG computation the system (10) was solved for each lead. This was done once with a torso model at 1 mm resolution and once with a torso model at 0.2 mm resolution. Like the FSF, the latter calculation was exceptionally large and was only intended to provide reference values, to test the hypothesis that 1 mm resolution suffices for such calculations. In either case 266 lead fields were computed: the 12 standard ECG leads, and one lead for each of the 252 vest electrodes and 2 hip electrodes referenced against Wilson's central terminal (the average of the two arm electrodes and the left leg electrode). The computed lead fields Z were stored in files. A dedicated program computed ∇Z and downsampled it using the two methods described in section 2.5, i.e., with and without consideration of the tissue types of the elements. The field computed at 0.2 mm resolution was downsampled by the factors 2, 5, 10, and 25 to obtain resolutions of 0.4, 1, 2, and 5 mm. The field computed at 1 mm resolution was downsampled by the factors 2 and 5 to obtain resolutions of 2 and 5 mm. Testing Protocol ECGs were simulated using each of the 4 methods described in section 2.5 and, for the methods based on lead fields, at each of the resolutions mentioned in section 2.6. The ECG potentials V were compared to a reference ECG V ref in terms of three measures: maximum, root-mean-square (RMS), and relative difference (RelDif) (van Oosterom, 2001;Tysler et al., 2007), defined as where the index t ranges over all 500 samples and the index n ranges over all 266 leads. For the 252 vest leads the dependence of the error values on the position of the positive electrode was investigated. The effect of the ECG computation on the run time of a reaction-diffusion model was investigated and the scalability of the 4 methods was investigated by running tests on 16, 32, . . . , 512 nodes of a Bull cluster. Each of these nodes was equipped with two 14-core Intel Xeon E5-2690 processors with 2.6 GHz clock frequency and 64 GB memory. Accuracy results are reported as averages over the 7 activation sequences. Performance tests were carried out 5 times to report average values and standard deviations of run time. Numerical Methods Simulations were performed using the Propag-5 software (Krause et al., 2012), to which new code was added to compute a lead field-based ECG on the fly during a simulation of the heart, and to facilitate the computation of the lead fields themselves. 
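The plain (LF) and selective (LFS) downsampling of the lead vector field described above amount to averaging n x n x n blocks of fine-mesh elements, with LFS restricting the average to myocardial elements whenever a block contains any. Below is a minimal sketch of that block averaging, assuming the field is stored on a regular element grid with a matching tissue-type array; the array layout and tissue coding are assumptions for illustration.

```python
# Block-wise downsampling of a lead vector field by a factor n, in the plain
# (LF) and tissue-selective (LFS) variants described above.  Array layout and
# tissue coding are illustrative assumptions.
import numpy as np

def downsample_lead_field(grad_Z, is_myo, n, selective=True):
    """
    grad_Z : (nx, ny, nz, 3) lead vector field on the fine element grid
    is_myo : (nx, ny, nz)    True where the element is myocardial
    n      : downsampling factor (fine grid dimensions divisible by n)
    Returns the coarse field of shape (nx/n, ny/n, nz/n, 3).
    """
    nx, ny, nz, _ = grad_Z.shape
    blocks = grad_Z.reshape(nx // n, n, ny // n, n, nz // n, n, 3)
    myo = is_myo.reshape(nx // n, n, ny // n, n, nz // n, n)

    plain = blocks.mean(axis=(1, 3, 5))
    if not selective:
        return plain

    # LFS: inside blocks that contain myocardium, average only the
    # myocardial elements so that non-myocardial tissue does not leak in.
    w = myo.astype(float)[..., None]
    count = w.sum(axis=(1, 3, 5))
    myo_avg = (blocks * w).sum(axis=(1, 3, 5)) / np.where(count > 0, count, 1.0)
    has_myo = count[..., 0] > 0
    return np.where(has_myo[..., None], myo_avg, plain)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    field = rng.normal(size=(10, 10, 10, 3))
    myo = np.zeros((10, 10, 10), dtype=bool)
    myo[:5] = True                       # half the volume is myocardium
    coarse = downsample_lead_field(field, myo, n=5)
    print(coarse.shape)                  # (2, 2, 2, 3)
```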
Like its predecessor Propag-4 (Potse et al., 2006), the software uses a structured mesh, but stores information only for elements and nodes that are relevant for the computation: only myocardium for a monodomain model, and only conducting material for a bidomain model. As discussed by Krause et al. (2012) Propag-5 uses a hybrid MPI/OpenMP parallellization scheme. Using a naive temporary partitioning of the domain the code reads the geometry in terms of elements and creates a node mesh using rules that ensure consistency with the scheme discussed in section 2.3. It then uses the ParMetis library to partition this mesh in parallel and creates a definitive domain partitioning for the computations. This fully parallel workflow allowed it to load and partition a mesh with over 3 billion nodes. Because in some of the computations the model size exceeded the maximum value of a signed 32-bit integer, Propag was compiled with a 64-bit integer type for global indices. The PetSC (Balay et al., 2017) and Parmetis libraries which Propag uses were compiled entirely with 64-bit integers because they do not have a distinct type for global indices. The linear systems (8), (10), and (11) were solved with a biCGStab solver (van der Vorst, 1992) with a BoomerAMG preconditioner from the Hypre package (Henson and Meier Yang, 2002;Falgout et al., 2017). The solver terminated when the norm of the error term was 10 −8 times smaller than the norm of the right-hand side. Multigrid preconditioners such as BoomerAMG are very powerful and well-suited for large bidomain problems (Sundnes et al., 2002;Weber dos Santos et al., 2004;Austin et al., 2006) so that the solver typically needs only a handful of iterations, in contrast to the problematic convergence observed on large models with an incomplete-LU preconditioner (Potse et al., 2006). RESULTS An example of a computed lead field is shown in Figure 2. This field was computed and stored at 1-mm resolution. The figure shows how the field suddenly changes direction and magnitude at lung boundaries. There is a slight left-right asymmetry because the highly conductive cardiac cavities concentrate the field on the left side of the thorax. The computed depolarization sequences of the 7 simulated heart beats that were used for ECG computation are shown in Figure 3. Potentials computed with a full-torso solution from beat 5 are shown in Figure 4. They are about 10 times larger in the myocardium than near the body surface. Lead-Field ECG Compared to Full Solution To establish that the lead-field and full solution methods produce the same results, simulated ECGs were compared between the LF(1, 1) and FSC methods. Averaged over the 7 simulations, RelDif was 0.0016, RMS error 0.3 µV, and maximum error 4.6 µV, while ECG amplitudes were in the order of 1 mV. Analogously, a single ECG was compared between the LF(0.2, 0.2) and FSF methods. In this case the differences were slightly smaller: RelDif was 0.0014, RMS error 0.2 µV, and maximum error 2.6 µV. Effect of Resolution To determine the effect of lead-field resolution on ECG accuracy, 7 different activation sequences were simulated with a monodomain reaction-diffusion model and ECGs were simulated on the fly using a lead field. This was done for the lead fields computed at 0.2 and at 1.0 mm and all downsamplings thereof, both with the LF and with the LFS method. The resulting ECGs were compared to a reference ECG. The results are shown in Figure 5. 
In Figure 5A errors are shown using the ECG computed with LF(0.2, 0.2) as the reference. For the fields subsampled from those computed at 0.2 mm resolution, differences are seen to increase roughly linearly with the stepsize of the lead field. The LFS method resulted in smaller differences. Results obtained with the field computed at 1.0 mm resolution and downsamplings differed from the reference solution with little dependence on the sampling level. Figure 5B shows that this dependency is recovered when ECGs computed with LF(1, 1) are used as the reference. The relatively large influence of the spatial stepsize in the leadfield computation suggests that differences in model geometry which are very similar to the differences between LF(1, 1) and LF(0.2, 1) in Figure 5A. To find out at which locations in the model the lead fields computed with LFS(0.2, 1) and LF(1, 1) differed, the L2 norm of the difference between the two vector fields was computed for all elements. Large differences were found to occur at locations where the fiber direction was highly variable. One such location, at the inferior septal junction, is illustrated in Figure 6. It is compared with a measure of variability in fiber direction in the underlying anatomy files, computed as Frontiers in Physiology | www.frontiersin.org where P is the fiber direction in the coarse-mesh element and p i are the fiber directions in the corresponding fine-mesh elements. The absolute value, denoted as |.|, was taken because the orientation of the direction vector is irrelevant. In Figure 7 a few ECG leads are compared between different computation methods. In Figure 7A full solutions at 0.2 and 1.0 mm are compared. At the coarser resolution the ECG appears more fractionated; this is particularly visible in lead III. As discussed above, the RelDif between these ECGs was 0.10. In Figure 7B the same full solution at 0.2 mm is compared with an ECG computed with LFS(0.2, 2). Despite the 10-fold downsampling of the lead field the traces are visually identical; the RelDif was 0.02. Thus, an ECG computed with a lead field downsampled to 2 mm resolution is more faithful than a full solution at 1 mm resolution, when compared to a solution at 0.2 mm. Table 2 shows how ECG computation with lead fields at different resolutions affects the run time of a typical simulation. The data in each row were obtained from 5 simulations of 500 ms activity with a reaction-diffusion model at 0.2 mm resolution, run on 32 compute nodes (896 cores). The table separates initialization time, ECG computation time, and simulation time (including ECG computation but excluding initialization). For lead fields at 0.2 and 0.4 mm resolution the initialization time is of the same order of magnitude as the simulation time, due to the time it takes to read the lead fields from file (141 and 53 GB in these cases). The time for ECG computation itself ranges between 4 and 5 % of the simulation time, slightly reducing with the leadfield resolution. At 1 mm resolution the memory accesses related to ∇Z (for 266 leads) are similar to those for G i ∇V m so a further reduction would not be expected. At 0.2 mm resolution the ECG computation is faster than at 0.4 mm, likely because in this case the lead field has the same resolution as the reaction-diffusion model and the code then avoids an index conversion. Figure 8A, shows how the computation times scale with the number of cores used for a single lead-field resolution of 1.0 mm. 
The reaction-diffusion simulation and the ECG computation scale well. Initialization time increases with the number of cores, due to increasing communication for mesh distribution and data input. Tests with higher and lower lead-field resolutions, not res, lead-field resolution in mm; sim, total simulation time; init, initialization time. Time is given as average ± standard deviation over 5 simulations, in seconds. Performance presented in the figure, showed that the initialization time was highly variable and had no clear relation with the resolution (and thus the storage size) of the field. Rather, the number of collective read operations seemed to be determining. The black trace in Figure 8A shows the scaling of a full solution (FSC method). It is over 2 orders of magnitude slower than the lead-field ECG and stops scaling at 7,168 cores. Figure 8B shows how the ECG computation time scales with the number of nodes for all tested values of lead-field resolution. Lead-field resolution is seen not to affect the scaling with the number of cores. Generally the time decreases slightly with decreasing resolution but, as in Table 2, the computation at 0.2 mm was faster than the one at 0.4 mm. DISCUSSION This study shows that a lead-field approach is an attractive solution for ECG simulation on (large) parallel computers whenever the number of ECG leads is smaller than the number of samples. It is about 100 times faster than a full solution, scalable to more than 10 4 cores, and does not cause a significant loss in accuracy. Lead fields can be stored at a resolution as low as 2 mm, meaning that they do not use excessive disk space even for a few hundred leads. Previous Work on Lead Fields The concept of lead fields was initially proposed by McFee and Johnston (1953) as a method to understand how ECG leads "view" the heart. Their purpose was in the first place to design leads that would be better in the sense that their fields would be more uniform inside the heart muscle (McFee and Johnston, 1954). Later the idea has been adopted for the purpose of accurate numerical simulation of the ECG (Geselowitz, 1989) and even local electrograms inside the heart (Colli-Franzone et al., 2000;Western et al., 2015). The idea to use lead-field methods for ECG simulation has been widely adopted. While the very earliest studies did not use them, for example because they computed only a small number of potential distributions (Gelernter and Swihart, 1964) or because a full solution required less memory (Barr et al., 1966;Barnard et al., 1967), numerous studies are based on some form of lead fields or transfer coefficients between V m in the heart and φ e on the body surface (Horacek, 1973;Miller and Geselowitz, 1978;Mailloux and Gulrajani, 1982;Aoki et al., 1987;Lorange and Gulrajani, 1993;Trudel et al., 2004). Mailloux and Gulrajani (1982) and further work from the same group (Lorange and Gulrajani, 1993;Trudel et al., 2004) used transfer coefficients that are mathematically identical to lead fields. Their transfer coefficients were computed with a boundary element model (BEM) which accounted for heterogeneity of the torso, but not for anisotropy. They found that they needed <100 regions to define these coefficients, likely because their model was isotropic. In the anisotropic model used here the lead field changed considerably through the wall, requiring a much higher though not prohibitive resolution. 
Jacquemet (2015Jacquemet ( , 2017 evaluated the performance of the same (BEM-based) method on a reaction-diffusion model of the human atria and found that 1,000 regions sufficed for a 1% accuracy. Boulakia et al. (2010) reported that an ECG simulation based on a transfer matrix was 60 times faster than solving a coupled heart-torso problem. They were using a finite-element model with about 1 million tetrahedra whose sizes gradually increased from the heart to the torso surface, and a serial code. Despite the obvious differences in methods the speedup was very similar to what was found in the current study. Electrocardiographic inverse modeling studies that used volumetric transmembrane potentials or current dipoles as their source models have also used transfer coefficients that are similar to lead fields (Liu et al., 2006;Wang L. et al., 2013). An interesting alternative is a mixed approach in which anisotropic regions such as the heart and skeletal muscle are handled with finite elements and isotropic regions with boundary elements (Pullan and Bradley, 1996), resulting in fewer degrees of freedom than a complete volume discretization. There is a considerable body of literature dedicated to the problem of solving body-surface potentials from epicardial (extracellular) potentials (Barr et al., 1977;Pilkington et al., 1987;Stenroos and Haueisen, 2008), which has found an application in cardiac inverse modeling (Greensite and Huiskamp, 1998;Ramanathan et al., 2004;Shou et al., 2008). A formulation in terms of transmembrane potentials on the (endocardial and epicardial) surface of the cardiac muscle is possible if equal anisotropy of the intracellular and extracellular domain is assumed (Geselowitz, 1989;van Oosterom and Jacquemet, 2005) and is also used to solve cardiac inverse problems (Oosterhoff et al., 2016). Strengths and Limitations ECG simulation based on lead fields is very fast and as scalable as a monodomain reaction-diffusion model. This makes it suitable for inclusion in the same model run on a large-scale parallel computer or a GPGPU, in contrast to full solutions, which would limit the scalability of the entire computation. This advantage is present whenever the number of ECG samples to be simulated exceeds the number of leads. Lead-field methods can also be used to compute local electrograms in the heart but this may require a higher spatial resolution at least near the electrode (Colli-Franzone et al., 2000). For detailed spatial mapping of potentials, either in the heart or on the torso surface, lead-field methods are less advantageous, as the number of locations might exceed the number of samples and may even be so large that the storage of the lead fields becomes a performance bottleneck. In such cases full solutions remain the method of choice and a relatively long solution time will have to be accepted. Although new developments in scalable preconditioners may improve the situation somewhat (Munteanu et al., 2009;Ottino and Scacchi, 2015), it is unlikely that full solvers will ever scale as well as an ECG computation based on lead fields. It would also be challenging to use a lead-field approach in an electromechanical, deforming heart model. A lead field that would be deformed with the mesh might be a reasonable approximation but this has not been tested here. The results of this study also suggest further improvements, in the first place the use of non-uniform mesh density for lead-field computation. 
Comparison of ECGs computed at 0.2 and 1.0 mm resolution showed that the latter had artefactual notches of about 0.05 mV amplitude in the QRS complex, due to misrepresentation of fiber orientation at locations where this orientation changed rapidly. This applied to both full solutions and lead-field ECGs. To avoid such artifacts one could try to ensure a smooth fiber orientation throughout the model (Bayer et al., 2012), but this can be challenging at the interventricular junctions, or whenever measured fiber orientations rather than rule-based orientations are used. The only alternative seems to be computation of the lead field with a mesh at the same resolution as the reaction-diffusion model inside the heart, and for improved efficiency a lower resolution elsewhere in the torso (Pullan and Bradley, 1996;Boulakia et al., 2010). While the computations could still be hard on a mesh with a wide variation in element size, the memory requirements would be much lower than the 12 TB reported here for the reference torso model. Another possible improvement that would be relevant for very accurate computations with high-resolution lead fields is to develop suitable compression methods for lead-field data. Very likely the regularity of the field could be exploited by using fixedpoint numbers in combination with spatial differentiation and a variable-length encoding. In Figure 8A, a particularly unfavorable scaling of the initialization phase was shown for the propagation model with lead-field ECG. This was probably due to an issue with the collective reading operation in the MPI library that was used, but also to the fact that for this feasibility study little care had been taken to organize this efficiently-after all the specifications for this code depended on the outcome of the study. With these results in hand it should be possible to avoid this problem by using a more efficient storage format and organizing the read operation in a different way. The figure also shows that the FSC method takes an order of magnitude more time than the reactiondiffusion model. This difference is partly due to the small solver tolerance that was chosen for this study. Applications The use of lead-field methods simplifies the workflow for largescale cardiac simulations, as it allows the ECG to be computed on the fly with very little overhead during a reaction-diffusion simulation on a mesh of the heart alone. Moreover, its high scalability allows the resolution of the models to be increased without causing a disproportional increase in the time needed for ECG computation. The results of this study are not only relevant for work on large-scale computers but also for simulations on generalpurpose graphics processing units (GPGPU). Reaction-diffusion simulations on GPGPUs have been reported by several groups (e.g., Bartocci et al., 2011;Neic et al., 2012;Mena et al., 2015;Kudryashova et al., 2017), recently even for a whole human heart model run on a desktop computer (Vandersickel et al., 2016). The strength of a GPGPU is that it provides thousands of parallel processors for the price of a single CPU. However, communication between these processors is a distinct weakness. With a method based on lead fields it is nevertheless possible to add rapid ECG computation to a model running on a GPGPU. Pezzuto et al. (2017) have recently reported such a method, though in combination with an eikonal model rather than a reaction-diffusion model. 
In the context of ECG inverse models and model personalization a variety of methods has been reported ranging from infinite-medium potentials (Giffard-Roisin et al., 2017;Neic et al., 2017) to full-torso bidomain solutions (Wang D. et al., 2013). A lead-field approach could offer a solution that combines the speed of the former (if the computation of the lead field itself is excluded) with the accuracy of the latter. Only methods based on equivalent double layers (Geselowitz, 1992;van Oosterom and Jacquemet, 2005) offer more efficiency as they need to evaluate only the surface of the heart, but the price for this efficiency is that these methods neglect anisotropy. A lead-field approach combined with an eikonal-diffusion model for cardiac propagation (Konukoglu et al., 2011;Jacquemet, 2012;Neic et al., 2017) could soon be a practical solution for ECG inverse problems with an accuracy very close to the state of the art in forward modeling of the ECG. CONCLUSION Lead fields are a practical alternative for full-torso solutions when the number of ECG leads that need to be simulated is smaller than the total number of samples that will be calculated. The method is fast and highly scalable. Lead fields can be stored at a resolution as low as 2 mm without unacceptable loss of accuracy. AUTHOR CONTRIBUTIONS The author confirms being the sole contributor of this work and approved it for publication.
9,109.8
2018-04-20T00:00:00.000
[ "Computer Science", "Engineering", "Medicine" ]
Distributed Active Learning Strategies on Edge Computing Fog platform brings the computing power from the remote cloud-side closer to the edge devices to reduce latency, as the unprecedented generation of data causes ineligible latency to process the data in a centralized fashion at the Cloud. In this new setting, edge devices with distributed computing capability, such as sensors, surveillance camera, can communicate with fog nodes with less latency. Furthermore, local computing (at edge side) may improve privacy and trust. In this paper, we present a new method, in which, we decompose the data processing, by dividing them between edge devices and fog nodes, intelligently. We apply active learning on edge devices; and federated learning on the fog node which significantly reduces the data samples to train the model as well as the communication cost. To show the effectiveness of the proposed method, we implemented and evaluated its performance on a benchmark images data set. I. INTRODUCTION Internet of Thing (IoT) is growing in adoption to become an essential element for the evolution of connected products and related services.To enlarge IoT adoption and its applicability in many different contexts, there is a need for the shift from the cloud-based paradigm towards a fog computing paradigm.In cloud-based paradigm, the devices send all the information to a centralized authority, which processes the data.However, in Fog Computing (FC, [1]) or Edge Computing (EC, [37]) the data processing computation will be distributed among edge devices, fog devices, and the cloud server.FC and EC are interchangeable.This emergence of EC is mainly in response to the heavy workload at the cloud side and the significant latency at the user side.To reduce the delay in fog computing, the concept of fog node is introduced.The Fog Node (FN) [2] is essentially a platform placed between Cloud and Edge Devices (ED) as middleware, and it will further facilitate the 'things' to realize their potentials [3].This change of paradigm will help application domains, such as industrial automation, robotics or autonomous vehicles, where real-time decision making by using machine learning approaches is crucial. The research leading to these results has received funding from the European Unions Horizon 2020 research and innovation program under the Marie Sklodowska-Curie grant agreement No. 764785, FORA-Fog Computing for Robotics and Industrial Automation.This work was further supported by the Danish Innovation Foundation through the DanishCenter for Big Data Analytics and Innovation (DABAI). Convolutional Neural Network (CNN), which is one subtype of the artificial neural network and models the system by observing all the data during training at a single place.Notably, the emergence of CNN [27] introduces the common usage of the neural network as it is more efficient concerning computation cost comparing with (fully connected) neural network.However, to train the model, we need to access all the data, which may create communication bottlenecks and privacy issues [28].Generally, to train a CNN model, users send all the data to a single machine or a cluster of machines in a data center.However, this sharing operation with the central authority in data centers costs both network usages and breaching of user privacy, since the user doesn't want to share personal, sensitive information, e.g., legally protected medical data. 
FL [6] allows a centralized server to train models without moving the data to a central location. In particular, FL is used in a distributed way, in which the model is built without direct access to the training data; indeed, the data remains in its original location, which provides both data locality and privacy. In the beginning, a server coordinates a set of nodes, each with training data that cannot be accessed by the server directly. These nodes train a local model and share the individual models with the server. The server uses these individual models to create a federated model and sends the model back to the nodes. Then, another round of local training takes place, and the process continues. Nevertheless, this extra work on edge devices has to be minimized by selecting the most important data samples needed to build the local model. In this context, we want to use Active Learning (AL) as a more effective learning framework. AL chooses training data efficiently when labeling data is drastically expensive. Motivated by the above-mentioned research issues and possible directions, we propose a new scheme. In the literature, there exist some papers that discuss the application of machine-learning algorithms directly on the fog node platform [30] [31]. As we have already discussed, to efficiently use the cloud and fog infrastructure, we need to delegate the work among them. Hence, in this paper, we propose a new efficient privacy-preserving data analytic scheme in the fog environment. We propose using federated learning at the centralized fog devices to create the initial training model. To improve the performance further, we recommend using Active Learning at the edge devices, by selecting the sample points effectively. All in all, we propose a possible solution in the Edge Computing setting, where user privacy, training cost, and the upload bottleneck are the main issues to address. Our strategy may reduce the training cost by applying AL, and it preserves user privacy and reduces communication by using FL. Moreover, the proof of concept is demonstrated by applying the method to a benchmark dataset. The remainder of this paper is organized as follows. Section II explains preliminary concepts, in particular CNN, FL, the AL framework, model uncertainty measurement, and previous studies related to our work. In Section III, we introduce the proposed scheme. Section IV covers the details of our experimental design and data collection strategy, followed by a discussion of our results. Section V concludes the paper. II. PRELIMINARY CONCEPTS AND RELATED WORK In this section, we discuss the different techniques that are essential for our scheme. We also present a brief overview of the major research studies related to our work.
A. Convolutional Neural Network The Convolutional Neural Network (CNN) was first proposed in [9], addressing the computational problems imposed by the fully-connected neural network, in particular the deep neural network. It is composed of several layer types, including the following: • Fully-connected Layer: Here, all the neurons in a fully connected layer connect to all activations in the previous layer. We compute the output for node i at layer l, denoted as p_i^l, using its weights w_{i,j}^l and the inputs from the previous layer o_j^{l-1}, i.e., p_i^l = sigma(sum_j w_{i,j}^l o_j^{l-1}), where sigma is the activation function. CNNs commonly have a huge number of parameters, and sending updates for this many values to a server leads to huge communication costs. Thus, a simple approach to sharing weight updates is not feasible for larger models. Since uploads are typically much slower than downloads, it is acceptable that users have to download the current model, while compression methods should be applied to the uploaded data. B. Federated Machine Learning Federated learning (FL) is a collaborative form of machine learning where the training process is distributed among many users; this enables building machine learning systems without complete access to the training data [6]. In FL, the data remains in its original location, which helps to ensure privacy and reduces communication costs. The server or the central entity does not do most of the work but only coordinates everything for a federation of users. In principle, this idea can be applied to any model for which the criterion of updates can be defined, which naturally includes methods based on gradient descent, as most of the popular models nowadays are. For instance, linear regression, logistic regression, neural networks, and linear support vector machines can all be used for FL by letting users compute gradients [33] [34]. Concerning data, FL is especially useful in situations where users generate and label data by themselves implicitly. In such a situation, Federated Learning is very powerful since models can be trained with massive data that is not stored and not directly shared with the server at all. We can thus make use of massive data that we could otherwise not have used without violating the users' privacy. FL aims to improve communication efficiency and train a high-quality centralized model. The centralized model is trained over the distributed client nodes, which we refer to as edge devices in the FC setting. The model is locally trained on every device, and the devices update the refined models to the Fog Node (server node) by sending the parameters of the models. The FN might aggregate the parameters in different ways, for instance, averaging the parameters, choosing the best-performing model or summing the weighted parameters. We define the goal of federated learning as learning a model with parameters embodied in a matrix W from data stored across a large number of clients (Edge Devices). Suppose the server (FN) distributes the model (at round t) W_t to the N clients for further updating, and the updated models are denoted as W_t^1, ..., W_t^N. Then, the clients send the updated models back to the server, and the server updates the model W according to the aggregated information: W_{t+1} = sum_{k=1..N} alpha_k W_t^k, where alpha_k can be uniformly distributed or set according to the round t-1 performance; we use the former in our work, namely, averaging the parameters. The learning process can be iteratively carried out.
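The aggregation step above can be written down in a few lines. The following sketch (the function and variable names are ours, not taken from the paper) averages the parameters of locally trained PyTorch models with uniform weights, as used in this work; all parameters are assumed to be floating point.

import copy
import torch

def federated_average(client_models, alphas=None):
    # W_{t+1} = sum_k alpha_k * W_t^k, with uniform alpha_k by default.
    n = len(client_models)
    if alphas is None:
        alphas = [1.0 / n] * n
    states = [m.state_dict() for m in client_models]
    avg_state = copy.deepcopy(states[0])
    for key in avg_state:
        avg_state[key] = torch.stack(
            [a * s[key].float() for a, s in zip(alphas, states)]
        ).sum(dim=0)
    return avg_state

# Usage (hypothetical): after local training on each edge device, the fog
# node aggregates and redistributes the model.
# fn_model.load_state_dict(federated_average([m1, m2, m3, m4]))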
C. Active Machine Learning Active learning (AL) is a particular case of machine learning in which a learning algorithm interactively queries the user to obtain the desired outputs at new data points. Typically, AL achieves higher performance with the same number of data samples, using the same method (for example, a support vector machine or a neural network) but with a more sophisticated data acquisition [4]. In other words, AL achieves a given performance using less data. Active learning can be divided into two categories: pool-based and stream-based [29]. Pool-based active learning queries the most informative instance from a large pool (see Fig. 1), whereas the stream-based variant typically draws one instance at a time from the input source, and the learner must decide whether to query or discard it. In this paper we consider pool-based active learning. The critical point is the way of choosing training data, which is carried out by an interaction between the data pool and the model: the model strategically picks the new training data according to some specific criteria (refer to the Acquisition Function in subsection II-E). The authors in [12] showed that a pool-based support vector machine classifier significantly reduces the data needed to reach a particular level of accuracy in text classification; analogously, [13] showed this for an image retrieval application. Active learning is an appropriate choice when i) labeling data is expensive, or ii) data collection is limited. Initially, researchers fitted the machine learning algorithms that mostly work for tabular data into the active learning framework. Recently, it has started to be combined with deep neural networks, although the two seemingly contradict each other, as a deep neural network typically requires large training data. In the next subsection II-D, we will introduce a vital concept, the Bayesian neural network, which is the foundation that allows AL to work appropriately on image processing with a neural network. D. Bayesian Neural Network approximated by Dropout In this subsection, we briefly introduce the Bayesian neural network, in which dropout is applied to approximate the variational inference, as proposed by [32]. A Bayesian neural network is defined by placing a prior distribution over the weights of the neural network. Let us define the weights of the network as W = (W_i)_{i=1}^L; we are interested in the posterior p(W|X, Y), given all the observations X, Y. As is known, the posterior involves an intractable integral [35]. In [32], this is solved by an approximation of the real distribution and Monte Carlo sampling. Thus, we define the approximating variational distribution q(W_i) at layer i through W_i = M_i · diag([Z_{i,j}]_{j=1}^{K_{i-1}}), where Z_{i,j} ∼ Bernoulli(p_i) for i = 1, ..., L, j = 1, ..., K_{i-1}, p_i is the dropout probability, and the M_i are variational parameters to be optimized. The diag(.) operator maps a vector to a diagonal matrix whose diagonal elements are the elements of the vector. K_i indicates the number of neurons in layer i. Given an input x and weights w sampled from q(w), the predictive distribution of interest is approximated as p(y|x, D_train) ≈ (1/T) sum_{t=1..T} p(y|x, w_t) with w_t ∼ q(w), and the w_t are sampled by applying dropout on the corresponding layers. This is referred to as Monte Carlo dropout (MC-dropout). E. Acquisition Function The acquisition function is a measure of how desirable a given data point is for minimizing the loss or maximizing the likelihood. In this paper, we are going to use MC-dropout to sample weights and a particular type of uncertainty-based method called Maximal Entropy to measure the uncertainty, as it outperforms the others reported in [16].
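A minimal PyTorch sketch of the MC-dropout predictive distribution and the Maximal Entropy acquisition used in this work could look as follows; the function names and the number of stochastic forward passes are illustrative assumptions, not taken from the paper.

import torch
import torch.nn.functional as F

def mc_dropout_predict(model, x, n_passes=20):
    # Keep dropout active at prediction time and average the softmax
    # outputs over several stochastic forward passes (MC-dropout).
    model.train()
    with torch.no_grad():
        probs = torch.stack([F.softmax(model(x), dim=-1) for _ in range(n_passes)])
    return probs.mean(dim=0)          # approximates p(y | x, D_train)

def max_entropy_acquire(model, pool_x, k=10):
    # Pick the k pool points with the highest predictive entropy.
    p = mc_dropout_predict(model, pool_x)
    entropy = -(p * torch.log(p + 1e-12)).sum(dim=-1)
    return torch.topk(entropy, k).indices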
Uncertainty-based methods aim to use the uncertainty information to enhance the model during the re-training process. This plays the role of exploitation, while the randomness in composing the data pool acts as the exploration part. We will introduce three different ways to estimate uncertainty. -Maximal Entropy: H[y|x, D_train] is the predictive entropy as defined in [10], H[y|x, D_train] = -sum_c p(y=c|x, D_train) log p(y=c|x, D_train). -BALD (Bayesian Active Learning by Disagreement [14]): it measures the mutual information between data and weights, I[y; w|x, D_train] = H[y|x, D_train] - E_{p(w|D_train)}[H[y|x, w]], and it can be interpreted as seeking data points for which the parameters (weights) under the posterior disagree the most. -Variational ratios: it maximises the variational ratio 1 - max_y p(y|x, D_train) [15]. It is similar to Maximal Entropy, but less effective as reported in [16]. III. PROPOSED SCHEME In this section, we discuss the functioning of the different components of the proposed scheme. The primary objective of the proposed scheme is to generate the model in an iterative and distributed way using the fog nodes and edge devices. The overall scheme is shown in Fig. 2, where the fog nodes work as middleware between the edge devices and the cloud server. Here, a fog node is connected to edge devices that implement a similar task for a specific application. For instance, a fog node might be linked to all the surveillance cameras that detect a particular object. Every camera possesses a trained model dispatched by the FN, and it keeps training on the images generated locally under the Active Learning framework. This is followed by Federated Learning: the models are individually trained by every device, which uploads the weights of the (refined) models to the FN. The whole process is sketched as follows (a code-level sketch is given after the list): • Firstly, FN trains an initial model "M" using "m" data samples, where "m" is very small and barely helps the model to learn. To generalize, we denote the model as M_t, where t is the round. • FN dispatches the model M_t to the edge devices. For example, let us say we have four devices, called E_1, E_2, E_3 and E_4, and each receives M_t from FN. • All edge devices implement Active Learning locally with the Maximal Entropy acquisition function. More specifically, during every acquisition (R acquisitions in total), edge devices train M_t with another "N" (N > m) data samples. • Next, the edge devices label the new models. In the example, E_1, E_2, E_3, E_4 label their local models as M_t^1, M_t^2, M_t^3, M_t^4, respectively. • Then, the edge devices upload the weights of the models M_t^1, M_t^2, M_t^3, M_t^4 to the centralized FN. • FN aggregates the weights either by averaging or by choosing the best-trained model, and passes the result to the next round t+1 if necessary. In our experiments, we set m equal to 20, and we only consider one round. Moreover, the model on the fog node side can be updated during every acquisition.
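A code-level sketch of one round of the scheme, reusing the illustrative federated_average() and max_entropy_acquire() functions from the earlier sketches, could look as follows; train_locally() is a hypothetical helper standing in for ordinary local SGD training on the newly acquired, oracle-labeled samples.

import copy
import torch
import torch.nn.functional as F

def train_locally(model, x, y, lr=0.01, epochs=1):
    # Hypothetical local-training helper: a few plain SGD steps.
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        F.cross_entropy(model(x), y).backward()
        opt.step()

def run_one_round(fn_model, edge_pools, r_acquisitions=10, k=10):
    # One round t: FN dispatches M_t, each edge device runs R acquisitions
    # of pool-based active learning, and FN averages the uploaded weights.
    # edge_pools: list of (pool_x, pool_y) tensors, one pool per device.
    local_models = []
    for pool_x, pool_y in edge_pools:                      # each device E_k
        local = copy.deepcopy(fn_model)                    # receives M_t from FN
        for _ in range(r_acquisitions):
            idx = max_entropy_acquire(local, pool_x, k=k)  # k most uncertain points
            train_locally(local, pool_x[idx], pool_y[idx])
        local_models.append(local)                         # M_t^k, uploaded to FN
    fn_model.load_state_dict(federated_average(local_models))
    return fn_model                                        # model for round t+1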
IV. EXPERIMENTS AND RESULTS In this section, we discuss the experimental setup along with the data set used for our evaluation, and the results of these experiments. Experiment Setup: Initially, we trained the CNN model with 20 images at the centralized node (FN), and then sent the model to the edge devices. On the device side, we further trained the model with additional data points. They are acquired by choosing the top 10 data samples that have the highest entropy, and this operation is iterated several times. Notably, the model is independently trained by the edge devices. This is followed by updating the refined models from all the devices to the centralized FN. The FN averages the parameters of the models for the future data analysis. All the experiments are implemented in the Python language, more specifically with the PyTorch package [36]. The code was run on a Mac OS system (High Sierra, version 10.13.6) with 16 GB of RAM. Data Set: We implemented the methods on the MNIST dataset [8], which is a real data set of handwritten digit images, with 60000 images for training and 10000 for testing, in ten classes in total. All the images have already been pre-processed, with a size of 28 * 28. It is the basic benchmark to test the performance of approaches in machine learning. A. Experiment I: random sample vs active learning on edge device To demonstrate the effectiveness of Active Learning, we compare its performance with randomly chosen data; the experiments are carried out on edge devices, as shown in Figs. 3 and 4 for 10 and 20 acquisitions. The outcome of every device with active learning outperforms randomly chosen data points. Notably, here we only want to show that AL has better performance than randomly choosing training samples, not to obtain a state-of-the-art result. B. Experiment II: AL acquisition number In this series of experiments, we study how the amount of training data influences the performance by plotting the learning curve. Recall that during every data acquisition, we include ten additional images for further training. Figs. 5, 6 and 7 illustrate the learning curves of the edge devices for 10, 20 and 40 acquisitions, respectively. Here, we ran the experiments five times and also plot the standard deviations. Again, the training on the edge devices is independent; namely, each device has a different dataset, which conforms to the practical situation in which data is generated locally and independently. We assume that all the data generated by the edge devices come from the same distribution. The first observation is that every device has its own learning curve, as the data at every device is different even though it comes from the same distribution. In addition, we build the data pool by randomly choosing 200 images at every iteration of acquisition from the whole dataset (10000), in order to reduce the computing cost, as the uncertainty is measured for all the data in the pool. This is the main reason we run the experiments for several rounds to test the real performance of our method. The learning curves of the devices are highly related to the aggregation strategy on the FN side. As discussed before, the options are: choose the optimal model, average the parameters of the models from the diverse devices, or weight the models. Heuristically, when the training data size is small, the accuracy varies between devices; thus, picking the optimal model leads to a higher accuracy than averaging parameters, and our method mainly shows its strength when the dataset is small. Instead, when the training size is large, the learning curves are not necessarily the same, but they end up with similar performance, as shown in Fig. 7.
Moreover, we compared the accuracy on the FN obtained by applying our approach with the result obtained by training on a dataset four times larger (4*N) than on every edge device (N), as we have four edge devices in our experiment. For instance, if we train the model with 100 data points on each edge device, then we compare it with the result of directly training with 400 data samples on the FN. The details are shown in Table I, where the columns indicate the different numbers of acquisitions from the data pool; during every acquisition we pick ten images, so Acq 10 means the model is further trained with another 100 images on every edge device. For the sake of comparison, we directly trained the initial model with 400 images, since we have four devices and every device is trained with 100 images. We then compared the accuracy with two aggregation strategies: averaging and optimal model. Note that it is arguable how many images we should train directly on the FN for comparison, since the model is not directly trained with 400 images when we apply FL; we train the model with 100 images on every device. Nevertheless, here we train the model with training data of size equal to the number locally trained on every device times the number of devices, considering the worst case. Notably, when the number of edge devices is large, the advantage of AL is not as obvious as in the case (4 edge devices) we demonstrated before. Assuming 1000 data points, if we have 4 edge devices, every one is trained with 250 images, while if we have 20 devices, then every device 'sees' 50 images. As we can guess, in the second case, the centralized device that uses the averaged weights works worse than one machine trained directly with 1000 images (50 << 1000). This can be solved by communication between devices, cascading the training process; namely, after one device completes training, it shares the weights with the closest device. V. CONCLUSION In this paper, we for the first time discussed Active Learning in a distributed setting tailored to the so-called Fog platform consisting of distributed edge devices and a centralized fog node. We implemented active learning on the edge devices to downscale the necessary training set and reduce the labeling cost. We presented evidence that it performs similarly to centralized computing with a reduced communication overhead and latency, while harvesting the potential privacy benefits. In the future, we will do more experiments with different settings (a large number of edge devices) and offer the corresponding solutions. In addition, we will study additional acquisition functions and also address the privacy issues in more detail. Fig. 5. Learning Curve of Edge Devices for 10 acquisitions of data.
Fig. 6. Learning Curve of Edge Devices for 20 acquisitions of data. Fig. 7. Learning Curve of Edge Devices for 40 acquisitions of data. TABLE I. Fog node performance with/without federated learning.
5,051.2
2019-06-01T00:00:00.000
[ "Computer Science", "Engineering", "Environmental Science" ]
Fecal Genetic Mutations and Human DNA in Colorectal Cancer and Polyp Patients Background: Colorectal cancer (CRC) is one of the most frequent cancers. Genetic mutations already described in CRC can be detected in feces. Microarray methods applied to feces can represent a new diagnostic tool for CRC and a significant improvement for public health. Aim: to analyze stool DNA by human DNA quantification and microarray methods as alternatives for CRC screening. Method: Three methods were analyzed in stool samples: Human DNA Quantification, RanplexCRC and KRAS/BRAF/PIK3CA (KBP) Arrays. Results: KBP Array mutations were present in 60.7% of CRC patients and RanplexCRC Array mutations in 61.1% of CRC patients. Sensitivity and specificity for human DNA quantification were 66% and 82%, respectively. The fecal KBP Array had 35% sensitivity and 96% specificity, and the RanplexCRC Array method had 78% sensitivity and 100% specificity. Conclusion: Microarray methods showed promise as potential biomarkers for CRC screening; however, these methods need to be optimized to improve accuracy and applicability in clinical routine. Introduction Colorectal cancer (CRC) is one of the most frequent cancers worldwide. It is the second most common cancer in Brazil, with an estimated risk of 36,360 new cases (INCA, 2016). The most effective way to treat CRC is surgical resection, but many patients die from disease progression. Risk of developing CRC increases with age; inadequate living habits and inherited genetic mutations are other factors that can increase the risk of CRC (NCCN, 2018). Early CRC detection can be accomplished through screening programs that reduce incidence and mortality rates (Lee et al., 2014; Schreuders et al., 2015). Fecal Occult Blood Testing (FOBT) by immunological test (FIT) or the guaiac method (Lee et al., 2014) and colonoscopy are the most common screening tests for CRC and polyps. Colonoscopy reduces incidence by 40% through polyp removal, but it is an invasive and expensive method (NCCN, 2018). Approximately 0.01% of the DNA found in stool is of human origin. Potential biomarkers for CRC screening such as KRAS, TP53 and BAT26, among other genes, are being analyzed and considered as stool markers complementary to existing methods of cancer identification (Vaughn et al., 2011; Tian et al., 2013; Colucci, 2013). Currently, the Cologuard® test, commercially available in the United States, analyzes KRAS mutations, NDRG4 and BMP3 methylation and the ACTB gene (MLDT, 2014). The study of DNA in fecal samples is a non-invasive method and can have higher adherence and be indicated even in ageing patients or patients with serious comorbidities (MLDT, 2014; Itzkowitz, 2007). The purpose of this study was to analyze stool DNA by human DNA quantification and mutation microarray methods as alternatives for colorectal cancer screening. Materials and Methods The study was approved by a local committee (No. 1,173/09). All subjects were informed of the study objectives and collection procedures and signed a consent form. Patients collected stool samples at home, prior to colonoscopy, and froze them (−20°C). Tissue samples were stored at −80°C. DNA extraction followed the manufacturer's protocol for DNA isolation from human tissues (QIAGEN 56304 - QIAamp DNA Micro Kit, Hilden, Germany). The delivered stool samples were stored at −80°C. DNA extraction was performed following the manufacturer's protocol for DNA isolation from human feces (QIAGEN 51504 - QIAamp DNA Stool Mini Kit, Hilden, Germany).
DNA was extracted in triplicate for each stool sample, as recommended by the RanplexCRC Array protocol, and stored at −20°C until the subsequent mutational analysis. Total DNA present in feces and DNA extracted from colonic tissues were quantified by spectrophotometry at 260 nm (NanoDrop 1000 - Thermo Fisher Scientific, Waltham, Massachusetts, USA). The DNA extracts from the fecal samples of each individual were also mixed in the same tube to form a pool of stool DNA. A second measurement of the pooled samples was performed by spectrophotometry at 260 nm. Human DNA was quantified in the DNA pool of each patient by Real-Time PCR (Real Time PCR System StepOnePlus, Applied Biosystems, Foster City, California, USA). The Standard Human Quantifier human quantification kit (Applied Biosystems, Foster City, California, USA) was used. Mutations in the TP53, KRAS, BRAF and APC genes in stool DNA were studied following the manufacturer's protocol of the Evidence Investigator TM RanplexCRC Array, based on the simultaneous detection of 28 mutations in these genes by microarray. The protocol is based on DNA extraction, probe hybridisation, ligation, PCR amplification and microarray hybridisation, optimized for the qualitative assessment of stool DNA. The Evidence Investigator TM KBP Array detects 20 mutations in the KRAS, BRAF and PIK3CA genes in DNA extracted from CRC tissue. The manufacturer's protocol was followed accordingly. Note that this method has been optimized for use with CRC tissue (fresh/frozen tissue or FFPE) and not for stool or polyps. The possibility of using this method with stool DNA samples may contribute to screening for colorectal cancer. Sanger sequencing was used to confirm positive mutations in DNA from the target tissue analyzed by the KBP Array. Statistical Analysis Parametric statistical tests were used because the data are quantitative and continuous. ANOVA and the t-test were used for comparison of means based on variance. The Concordance Index was used to measure the degree of agreement between two variables and/or results. In interpreting the results, a Kappa <20% was considered negligible; 21 to 40% minimal; 41 to 60% regular; 61 to 80% good; above 81%, excellent. For comparisons between two or more variables and/or their levels, the chi-square test was performed. A ROC curve was used to determine the cut-off level for human DNA in stool samples with the greatest sensitivity and specificity. A significance level of 5% (p=0.05) was defined for this assessment, and the confidence intervals constructed throughout the work were set at 95%. Results Among the stool samples, 47 were from CRC patients, 44 from controls and 16 from patients with polyps. The total DNA found in feces was not different between the groups. Regarding human DNA quantification (HDNA), we observed an average of 0.46 ng/µl for controls, 15.05 ng/µl for CRC and 0.10 ng/µl for polyps (p <0.0001). The HDNA percentages relative to total DNA in the CRC group, control group and polyp group were 0.4%, 0.12% and 16.5%, respectively. The mean HDNA was higher in tumors localized in the left colon and rectum (18.54±33.24 vs 0.31±0.57, p=0.002). There was no difference between the stages (p=0.247). KBP Array in Tissue and Stool DNA Of the 92 tissue samples analyzed, 30 had one or more mutations and 62 were wild-type (WT). Mutations occurred in 53% of the CRC group, 40% of the polyp group and 7% of the controls. KRAS gene mutations were the most prevalent, mainly in the CRC group (41.2%) (Table 1). Genetic sequencing was performed in 29 (97%) of the tissue samples with a mutation by the KBP Array. Twenty-two samples had the mutation confirmed by Sanger.
KRAS mutations were prevalent in the CRC and polyp groups (52% and 32%, respectively). KRAS mutations were confirmed in 85% of cases, and BRAF and PIK3CA mutations were confirmed in 100%. Mutations in more than one gene were also confirmed at 100%. The polyp group had 80% of mutations confirmed for the KRAS gene, but only 25% for the BRAF gene. In total, of the 110 fecal samples analyzed by the KBP Array method, 28 had a mutation in one or more genes, 62 were wild-type and 20 were considered inconclusive (impossibility of classifying the results as mutated or WT). The mutation results showed that 60.7% were in the CRC group and 35.7% in the polyp group. KRAS mutations were prevalent in CRC and BRAF in the polyp group. Mutations were found in 30 (32.6%) of the tissues and 28 (31.1%) of the stools. There was no statistical difference between the mutation percentages or between the genes mutated when comparing tissues and stools. RanplexCRC Array versus KBP Array in Stool DNA With the RanplexCRC method, 48 stool samples were analyzed; 18 had mutations, 61.1% of which were in the CRC group, and in 20 samples the results were inconclusive. Agreement between the 2 methods studied was assessed by the Kappa index, where we observed a regular concordance rate between the total results (44.5%). When comparing KBP with RanplexCRC, the agreement percentage between the two methods for CRC increased to 64% (Table 2). Correlation between human DNA quantification and the prevalence of conclusive results by the KBP and RanplexCRC Arrays Conclusive KBP Array fecal results had a higher mean HDNA (10.37 ng/ul versus 0.037 ng/ul for inconclusive tests, p=0.001). RanplexCRC Array results also had a greater HDNA quantity (23.9 ng/ul) within conclusive tests compared to inconclusive tests (0.75 ng/ul, p=0.049). Discussion According to Globocan, 1,800,000 new cases of CRC and 881,000 deaths occurred (Bray et al., 2018). Although most studies have shown decreased mortality from CRC after screening methods, studies should consider adherence to invasive tests, side effects and cost (Richter, 2008; Bevan and Rutter, 2018). In this study, feces were collected from individuals with a prior request for colonoscopy. Studies show that around 50% of individuals asked to collect stool for occult blood testing perform the collection (Grazzini et al., 2008; Larsen et al., 2018). In our study, approximately 70% of the patients referred for colonoscopy were ready to bring a stool sample. Through progress in molecular biology, new DNA stool tests have been developed with the purpose of early colorectal cancer or polyp detection. We found a relatively high quality of total DNA among samples, indicating that most participants followed our guidelines. DNA amounts in feces between 70 and 300 ng/μl were considered optimal for the extraction method chosen in this study. CRC develops mainly from polyps, histologically classified as adenomas. Regarding the clinical features of individuals with polyps, 75.8% had adenomatous polyps and 24.2% hyperplastic polyps. For many years this kind of lesion was considered a non-neoplastic lesion. Bauer and Papaconstantinou (2008) called attention to hyperplastic polyps. According to these studies, such lesions account for 80 to 90% of serrated polyps. The microvesicular hyperplastic polyp subtype may present mutations in the BRAF oncogene, suggesting this type of lesion as a precursor of serrated adenoma, which can be a CRC precursor (Sweetser et al., 2013). In this study we used DNA in feces as the main material for analysis. In cancer patients, due to exfoliation of tumor cells, the average human DNA content (15.05±30.7 ng/ul) was approximately 33 times higher when compared to the control group.
Similar results have already been published by our group (Teixeira et al., 2015). Among patients with polyps, no differences were observed when compared to the control group. These lesions appear to lose proliferative characteristics, cell adhesion and even apoptosis, but to a lesser extent than tumor cells. This study intended to analyze known CRC mutations by microarray analysis and to compare them to DNA quantification in feces. The KBP Array method was developed to evaluate tissue samples from patients with CRC. The array had never been assessed for stool or polyp samples and, as such, is not optimized for these sample types. Among tissue samples, mutations were observed in 53% of patients with CRC, 40% in the polyp group and 7% in the control group. KRAS mutations were mainly found in the case group (41%) (Table 1). Mutations at codons 12 and 13 promote the oncogenic potential of the KRAS gene and are the most frequently described within the literature (Tsiatis et al., 2010). To confirm the mutations, Sanger sequencing was performed. Array results were confirmed in 76% of cases. Genetic sequencing is considered a highly sensitive and specific methodology for the study of genes (Yamane et al., 2014). Using stool DNA in a method developed for tissue DNA was one of the biggest challenges of this study. As stated earlier, the total DNA present in feces has a small percentage of HDNA, requiring caution in handling, from extraction to storage and manipulation. In most studies using stool DNA, researchers needed to "treat" the DNA before analyzing it. This treatment mostly consists of an amplification reaction for capturing human DNA in stool using specific sequences of nucleotides, so-called probes. In this work, a method not developed specifically for stool DNA samples, i.e. without the possibility of capturing human DNA prior to genetic mutation analysis, was used. Through the KBP Array we observed 60.7% of mutations in cancer patients and 35.7% in adenomatous or hyperplastic polyps. As observed in tissue, the most prevalent mutation was in KRAS in CRC (51.6%). In the polyp group, the BRAF gene mutation was most frequent (30.3%) (Table 2). Yamane et al. (2014) cited two studies that demonstrated a high KRAS mutation frequency (45.2%) in serrated adenocarcinoma and suggest that a significant proportion of KRAS-mutated CRC originates from serrated polyps, and referenced a high BRAF mutation frequency (V600E) among serrated carcinomas (82%), emphasizing that this mutation is a specific marker of the serrated pathway. We found 20 (18%) samples with inconclusive results, which can be explained by the different amounts of human DNA present in the feces of patients with and without cancer; 65% of these patients belonged to the control group and only 15% to the CRC group. Genetic mutation analysis was also carried out with a method developed specifically for stool that identifies 28 mutations in four genes (APC, KRAS, BRAF, and TP53) involved in colorectal carcinogenesis, the RanplexCRC Array. This method has as its main tool the enrichment of specific regions of human DNA by PCR amplification of the gene regions where the studied mutations could be detected in the other method steps. This assures the researcher greater sensitivity for mutation analysis of DNA samples that are not purely human, as is the case for stool DNA samples. Unlike the KBP Array method, the RanplexCRC stool DNA array was developed as a CRC screen. Among the 20 samples from patients with cancer, mutation presence was observed in a lower percentage (61%). Excluding inconclusive samples, the mutation percentage was higher (78%).
Assessing the RanplexCRC method, we observed 36% of mutations in the TP53, KRAS or APC genes and 18% in the BRAF gene. In the polyp group, the mutation rate was lower with RanplexCRC; the greatest number of inconclusive tests was in the control group (30%) (Table 3). Comparing the RanplexCRC and KBP Arrays in feces, no differences were found between these two methods and there was a concordance of 64% for CRC. These results suggest that the KBP Array, developed for CRC tissue analysis, can also be used for stool DNA. When calculating the sensitivity and specificity of the methods excluding inconclusive results, we observed that the KBP and RanplexCRC Array methods had a sensitivity of 35% and 78% for CRC, respectively. These results show that the amount of human DNA in stool is a key factor in colorectal cancer screening. DNA quantity can also be related to the mutations found, mainly in the KBP Array. With a cut-off of 0.4 ng/ul, we observed that of the 28 mutation-positive stool DNA samples by the KBP Array test, 25 had human DNA quantified, and of these, 7 had less than 0.4 ng/ul and 18 more than 0.4 ng/ul. The sensitivity for this was 56%. In 2014, Imperiale et al. (2014) compared a panel of 21 mutations in stool DNA with Hemoccult II and found 51.6% sensitivity versus 12.9% for Hemoccult II. Ahlquist et al. (2012) studied stool DNA methylation in 4 genes and mutations in the KRAS gene and found a sensitivity of 78% and specificity of 90% for CRC. According to Bosch et al. (2012), a smaller number of cells would be required to detect DNA methylation in relation to mutation studies, which increases diagnostic sensitivity. In polyps, the authors found 48% sensitivity for adenomas ≥ 1 cm, unlike our findings, where sensitivity was between 63% and 71% for polyps, regardless of size, in the stool DNA study. Some stool DNA mutations considered false positives can also indicate a small tumor missed during colonoscopy. The American Cancer Society (ACS) recommends as an alternative a screening study of stool DNA every 3 years (NCCN, 2018). In conclusion, this study can contribute to CRC screening, since human DNA quantification in fecal samples can be a low-cost and simple method to allow identification of the cancer group. Microarray methods show promise as potential biomarkers for colorectal cancer screening, given that the KBP Array identified several mutations in precursor genes in stool DNA and can be completed in under 3 hours from DNA input. However, there is a need to optimize these methods to improve accuracy and ensure applicability in clinical routine.
3,656.8
2019-10-01T00:00:00.000
[ "Medicine", "Biology" ]
Complementarity in single photon interference - the role of mode functions and vacuum fields Single photon first-order interferences of spatially separated regions of the cone structure of spontaneous parametric down conversion allow for analyzing the role of the mode function in quantum optics. In earlier experiments the role of the vacuum fields could be demonstrated in induced coherence experiments as the source of complementarity \cite{Heu14}. Here the spatial coherence properties of these vacuum fields are measured and demonstrated to be the physical reason for complementarity in single photon quantum optics. Introduction Complementarity is one of the most important principles of quantum physics [1]. It is directly connected to the measurement problem [2]. In quantum optics light is detected as clicks of the single photons, which correspond to the transfer of an energy packet hν of the photon to the detector, within the spatial, temporal, spectral and polarization filter restrictions of the detector setup. The measured intensity of bright light results from the statistical superposition of these "clicks". The possible positions of these "clicks" are given by the interference pattern of the measured light fields. For a single photon this interference is given by the overlay of the electric field components, in the right order of the field operators belonging to this photon, at the detector, E_photon [2]. The possibly coherent light fields produce interference fringes resulting in a certain visibility V. In some cases the sources of the photons can be distinguished, resulting in a "which-path" distinguishability D. The complementarity principle in quantum optics answers the question: how coherent are the more or less distinguishable shares of the electric field (or how distinguishable are the paths of the more or less coherent light modes)? The upper limit for the combined measurement of both is D^2 + V^2 ≤ 1 [3]. The visibility is based on the coherence of the involved light modes in the different photon paths and is calculated from the maximum and minimum single photon count rates S of the interference fringes by V = (S_max − S_min)/(S_max + S_min). It can be concluded that coherence in single photon measurements decreases if paths become distinguishable [4]. Thus the question about the physical details behind this quantum law of complementarity may be asked. In a previous set of experiments we investigated the physical background of complementarity in the temporal domain and showed the role of the involved vacuum fields [16]. Using spontaneous parametric down conversion (SPDC) in induced coherence experiments, first-order interference visibilities of more than 95% were observed if the single photon was potentially emitted from two different SPDC crystals in an indistinguishable way. And, as expected, the visibility dropped to zero if the single photon path, which means the source of emission, could be determined, and vice versa. Because these photons were generated by three-wave mixing of a coherent pump field with certain vacuum fields, a deeper physical analysis was possible. High visibility could be obtained for the single photon interference only if the same vacuum field was acting in both crystals; no fixed phase relation occurred between the two photon waves if the involved vacuum fields could be distinguished. In this case the two waves of the single photon, emitted synchronously by the two crystals even in the same TEM00-mode, are not coherent, in contrast to the classical expectation.
In summary, the random phase relations between distinguishable vacuum fields were identified as the physical reason for the complementarity principle in the temporal dimension in this case [16]. Based on these results, the relation of coherence and distinguishability for single photons in the spatial dimension is investigated here. Quite some work has been done to investigate the coherence properties of the light emitted from SPDC. For example, the coherence area of the emission was investigated with the result that the source is incoherent across the pump spot, indicating a thermal emitter [5]. This fulfills the expectation, because SPDC is a spontaneous process. A similar but more detailed result was worked out experimentally and theoretically [6,7]. In [7] it was shown that, especially in radial direction, coherence effects from phase matching and other details lead to a more complex than thermal behavior of this SPDC light. In all these investigations the correlation between the coherence of the signal and idler photons was analyzed. These correlation and entanglement properties were also investigated in detail in [8][9][10]. The spatial emission can be nicely described using the Schmidt mode decomposition as investigated in [11][12][13][14]. But only very few investigations deal with the coherence of the single photons, which is the aim of this work (see e.g. [5,6]). Therefore we investigated the transversal coherence of single signal photons generated via spontaneous parametric down conversion (SPDC) by measuring them with a TEM00-mode detector in two separated modes, as illustrated in Figs. 1 and 2. In type I SPDC the two entangled signal and idler photons have opposite momentum components with respect to the pump photon and appear diagonally within the emission cone, as illustrated in Fig. 1. The selection of a single fundamental Gaussian mode (TEM00) is always possible for any light source (see e.g. [12,14,15]). Using two tilted arms of a Mach-Zehnder interferometer, we measure the relative coherence of two tilted and subsequently realigned (overlaid) Gaussian modes of a single signal photon selected out of the SPDC light cone, which are emitted from the same volume of the crystal (see Figs. 1 and 2). As demonstrated in the theoretical section, the single photons in the selected TEM00 detector modes are generated by three-wave mixing of the coherent pump beam and a vacuum field selected in this way. Therefore the coherence properties of these photons depend directly on the coherence properties of the involved vacuum fields. As a result, the properties of the vacuum fields can be identified as the physical background of the complementarity principle in the spatial dimension in this case. The details will be discussed in the conclusion. Experimental The experimental setup is shown in Fig. 2. The 2 mm long BBO crystal used as the nonlinear material was pumped by a diode laser (Blue mode, Toptica) with an emission peak wavelength of 405 nm and a cw power of 30 mW. From the emitted light cone the signal photons were selected on one side of the cone and fed into the Mach-Zehnder interferometer at the first beam splitter BS1. In one arm of the interferometer a beam shifter was used to tilt and shift one of the interfering TEM00-modes (signal 2 in Fig. 1) radially and tangentially relative to the reference TEM00-mode (signal 1 in Fig. 1) in the other interferometer arm. Both modes were perfectly overlaid at the second beam splitter BS2 to match the TEM00-mode of the detector.
With this TEM00-mode detector the single photon interference of the two beams is measured. This detector is constructed with a transversally single-mode fiber and an aspherical lens in front of it. It selects only photons belonging to the TEM00-mode of the fiber. The beam waist of this mode was aligned to match the TEM00-mode of the pump spot in the BBO crystal in size (165 µm in diameter) and position. Both have a Gaussian beam profile. With a delay ∆l, realized with a linear translation stage (Newport) allowing longitudinal delays of 25 mm with an accuracy of 10 nm, the interference fringes between signal 1 and signal 2 were measured and the visibility calculated. The resulting interference pattern was photographed with the EMCCD camera (see Fig. 2) on the other side of the beam splitter cube BS2. At this position the tangential and radial distance between signal 1 and 2 was also measured, using a HeNe laser (not shown) and a CMOS camera instead of the EMCCD. A spectral filter with a bandwidth of 2.5 nm (FWHM) at a peak wavelength of 808 nm was applied in front of the single-mode fiber. To improve the signal-to-noise ratio of the visibility measurement, the first-order signal photon interference could also be measured in coincidence with the temporally correlated idler photons on the other side of the cone, resulting in visibilities of almost one. But all visibility measurements given here were performed as single photon measurements without any corrections and not in coincidence. While the position of the idler photon detection was kept fixed at the position opposite to signal beam 1, a coincidence measurement between signal and idler photons allowed the determination of the distinguishability D = (R_path1 − R_path2)/(R_path1 + R_path2). R_path1 and R_path2 are the coincidence rates of the idler photons with photons of signal 1 and signal 2, respectively. Results and Discussion Perfect alignment of the Mach-Zehnder interferometer results in an unstructured bright pattern from the overlaid same share of the cone, indicating perfect local coherence (see photo in Fig. 3 (left)). Aligning both beams at a very small angle in almost horizontal direction results in fringes from the first-order interference of the single signal photons with high contrast, as can be seen in the middle trace of Fig. 3. Tilting the two interferometer arms tangentially (as in Fig. 3, right) or radially with respect to each other results in a decrease of the fringe visibility. The fringe distances and directions result from the different angles in the alignments, with no effect on the fringe visibility. All the following quantitative measurements are done with the best possible alignment of the two modes with respect to the detector mode, as in Fig. 3 (left), and the fringe visibility was determined by moving the translation stage for longitudinal delays (∆l in Fig. 2). In Fig. 4 the single photon interference pattern of the signal photons as a function of the translation stage position (longitudinal delay) is shown while overlaying the same spot of the light cone (signal 2 is the same mode as signal 1). It was measured with the single photon detector behind the second beam splitter. In the left graph of this figure the result of a low resolution measurement is given.
It allows the determination of the longitudinal coherence length l_c of the measured signal photons to be 83 µm (half width at half maximum of the interference signal), which is in good agreement with the bandwidth ∆λ of 2.5 nm (FWHM) of the applied spectral interference filter (l_c = λ^2/(π·∆λ)). In the graph of Fig. 4 (right) the result of a high resolution measurement for the same alignment is given. The maximum observed visibility of the uncorrected single photon measurements is 90% in both cases. This visibility measurement is used as a reference for the following sets of experiments, where one of the Mach-Zehnder interferometer arms is tilted transversally in tangential and in radial direction. The visibility for each data point was measured in the same way as in Fig. 4. The decrease of the visibility is evaluated as a function of the tilts. The results of these measurements are depicted in Fig. 5. In Fig. 5 (left) the result of the visibility evaluation is given as a function of a tangential tilt of one beam (signal 2 in Fig. 5 and Fig. 1) with respect to the fixed one (signal 1). No tilt results in the maximum visibility of about 90%, as given in Fig. 5. But the visibility drops rapidly if the mode of signal 2 is tilted by more than half the divergence angle of the selected Gaussian beam. It has to be emphasized that both interfering beams are well within the light cone structure and thus they both have almost undiminished intensity. This can be seen in Fig. 3, right, which is a measurement with less than 40% visibility. The tilt of the beam in radial direction results in a similar value, as can be seen in Fig. 5 (right). For the measurement of Fig. 5 (left), the distinguishability D of the two paths of the single signal photons in the two TEM00-modes of signal 1 and signal 2 was measured in coincidence with the idler photon as reference (see Fig. 1 and Fig. 2). The result is given in Fig. 6. As expected, the distinguishability is minimal in the case of overlapping modes and increases as the visibility decreases. It will approach 1 if the modes are completely separated. In between, the D^2 + V^2 expression has experimental values of about 0.8-0.9. The limitation below 0.9 can be understood as the result of the maximum visibility of 0.9. The two experiments of Fig. 5 (left) and Fig. 6 directly demonstrate the complementarity principle for the single signal photons. It has to be emphasized that in all these measurements no background corrections have been applied. Theoretical description The theoretical analysis of the experimental data can be based on a simplified model as applied in [16,17], using an effective Hamiltonian with couplings a_P a_S^† a_I^† and a_P^† a_S a_I, where a_P, a_S, and a_I (a_P^†, a_S^†, and a_I^†) are photon annihilation (creation) operators for the pump, signal, and idler fields. This describes the annihilation (creation) of the pump photon and the simultaneous creation (annihilation) of signal and idler photons in the SPDC 3-wave mixing process, with the vacuum field contributions a_S01 and a_S02. We assume perfect phase matching and restrict this analysis to single spectral field modes of frequency ω_S and ω_I, respectively, which is experimentally realized by the narrow spectral filter. The pump is treated as an undepleted, classical field.
From the Heisenberg equations of motion for the field operators we write the photon annihilation part of the total electric field operator for the two relevant signal field modes selected by the detector, ignoring an irrelevant multiplicative constant. C is a constant that incorporates the crystal properties and the classical pump amplitude. The strong spatial TEM00-mode filtering by the detector in this experiment extracts only small spatial shares from the emitted field. Therefore, in addition to the usual description as in [16,17], spatial mode functions U_S1,2 for the two measured TEM00 signal field modes propagating in the directions k_1,2 through the interferometer are included, as e.g. described in [19]. These directions are chosen by the position of the detector behind the two interferometer arms (see Fig. 2), and thus the momentum conservation and phase matching angular restrictions of the down conversion process are taken into account. The mode functions U_S1,2 of the two TEM00 detector modes are the corresponding Gaussian beam mode functions. At the second beam splitter BS2 the two fields are superimposed as shown in Fig. 2. In the upper arm a phase shift φ and the beam shifter for the TEM00-mode of signal 2 are implemented. Thus, behind the second beam splitter BS2 the electric field is the superposition of the two fields, where a displacement ∆r takes care of the tilted and thus, in the reference plane, shifted TEM00-mode (signal 2 in Fig. 1). Assuming low conversion efficiency, we retain only the lowest-order terms, which is single bi-photon generation. The signal photon counting rate in the selected mode of the detector, as given in Eq. 2, is proportional to the corresponding normally ordered field correlation. As a result, the interference visibility V at the reference plane can be calculated as the normalized cross-correlation of the two transversal field distributions of the modes of signal 1 and signal 2, shifted here in the y-direction (see Fig. 1). In this expression the mode selection is represented by the value of ∆y, which contains the propagation directions of the two TEM00-modes and thus the geometry restrictions of the involved vacuum fields, too. The beam radius w is the field radius given by the detector mode radius w_D at the crystal, the propagation distance z from the crystal to the reference plane and the wavelength λ. The optical length from the crystal to the reference plane was 661 mm, and thus the coherence radius of the observed mode was 2.06 mm at the reference plane. This radius is identical with the 1/√e half-width of the theoretical visibility curve. This is in agreement with the fit of the experimental data of this complex measurement. In other words, the observed transversal coherence width of the single photons is as large as the width of the selected detector mode for the single photons, although the total intensity distribution is much larger. For the tangential displacements it was 2.22 mm, resulting in a relative error of 7%. The theoretical result for the radial displacements shows a relative error of about 25%, which may still support the fundamental idea of this work, but in the radial direction phase matching effects may play a role [7] which are not considered here. Conclusion Although the emitted light cone structure is much wider than the observed mode, the transversal coherence length of the single photons within the cone is just as large as the detected single photon mode. This result seems counter-intuitive because especially in tangential direction, along the ring structure, no symmetry breaking feature, as e.g.
from the phase matching conditions, limits the transversal coherence along the ring. The emission of a single photon in each of the observed directions is equally probable. The only limiting factor that explains this experimental result is the selection of the involved modes. The analysis as applied in [16] shows that the 3-wave mixing process generating the single signal photons involves, besides the coherent pump light, also vacuum fields. By the observed TEM00 detection modes the relevant vacuum field modes are selected as TEM00-mode structures as well. The tilt of one of the photon modes is associated with a tilt of the related vacuum field mode (see Eqs. 3 and 5). This tilted vacuum field has a random phase compared to the non-tilted vacuum field, and the visibility drops rapidly whereas the distinguishability increases. In other words, with this experiment the influence of the involved vacuum fields regarding spatial coherence is observed. From this experiment it can be stated that a single photon TEM00 mode is coherent. This is also observed with bright light in classical optics. But using single photons, the distinguishability of the different photon paths could be measured as well. We measured that other, distinguishable modes of this photon are not coherent with it. In experiments with higher-order modes, e.g. TEM01 [18], this analogy could be observed indirectly, too. Photons from the same coherent mode are not distinguishable, and distinguishable photons always belong to different modes. From these experiments and the results of [16] it can be concluded that the vacuum, as the underlying background for all photon generation processes, may introduce general randomness in quantum optics in the spatial as well as in the temporal dimension. But by selecting coherent vacuum fields in the photon generation process we see coherent photons allowing interference experiments with visibilities of up to 1. This interplay is the physical background of the complementarity principle in quantum optics, and it is the result of the mode selection in the measurement process. Declarations Acknowledgment We gratefully acknowledge P. W. Milonni for very intense and illuminating discussions and the long-lasting collaboration. We would also like to thank M. W. Wilkens for always having an open ear for us. Figure 2: For parametric down conversion a 2 mm long BBO crystal was used. The crystal was cut for collinear type I phase matching around 405 nm. The path delay between the two arms of the Mach-Zehnder interferometer for observing the fringes and determining the visibility was realized by a motorized high-resolution translation stage (∆l). The beam selection from different positions of the light cone structure, as schemed in inset b, was realized by a beam tilt and shift arrangement which also allowed for the perfect overlay of the two selected modes at the second beam splitter BS2. The detectors are single-mode fiber coupled avalanche photodiodes SPCM-AQRH-13 from Perkin Elmer with lenses in front for defining the TEM00 modes of the idler and the two signal fields. Figure 5: Visibility as a function of the relatively tilted and thus in the reference plane shifted interfering TEM00-modes of the single signal photons in tangential and radial direction. The solid line is the result of the best fit of the experimental data using a Gaussian distribution in agreement with the theoretical model.
Figure 6: Distinguishability of the single signal photons in the two tilted and thus, in the reference plane, shifted TEM00-modes in tangential direction, measured in coincidence with the reference idler photon. The solid line is a fit of the experimental data using the result of Fig. 5 (left) as reference.
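As a numerical cross-check of the Gaussian-mode picture used in the theoretical description, the following sketch reproduces the quoted coherence length and coherence radius from the standard Gaussian-beam relations; the explicit overlap formula V(∆y) = exp(−∆y²/(2w²)) is our reading of the normalized cross-correlation of two shifted TEM00 field distributions, not an equation quoted from the paper.

import numpy as np

lam  = 808e-9        # signal wavelength (m)
dlam = 2.5e-9        # filter bandwidth, FWHM (m)
w0   = 165e-6 / 2    # detector-mode field radius at the crystal (m)
z    = 0.661         # optical path from the crystal to the reference plane (m)

# Longitudinal coherence length l_c = lambda^2 / (pi * dlambda)  ->  ~83 um
l_c = lam**2 / (np.pi * dlam)

# Free-space Gaussian-beam radius at the reference plane  ->  ~2.06 mm
z_R = np.pi * w0**2 / lam
w_z = w0 * np.sqrt(1.0 + (z / z_R)**2)

# Overlap of two identical TEM00 field distributions shifted by dy:
# V(dy) = exp(-dy^2 / (2 w^2)), giving V = 1/sqrt(e) at dy = w.
V_at_w = np.exp(-w_z**2 / (2.0 * w_z**2))

print(f"l_c = {l_c*1e6:.0f} um, w(z) = {w_z*1e3:.2f} mm, V(w) = {V_at_w:.2f}")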
4,888.8
2015-12-18T00:00:00.000
[ "Physics" ]
Framework to Increase Knowledge Sharing Behavior among Software Engineers The aim of this study is to increase knowledge sharing among Software Engineers. Knowledge sharing is a key activity in the Software Engineering field. However, increasing knowledge sharing in a Software Engineering organization is not an easy task because of the lack of experience of top management in dealing with human/social/soft aspects. Previous research in the Software Engineering field has heavily focused on technical aspects rather than non-technical (human/social/soft) aspects. Therefore, a good amount of research work needs to be done to understand the social, job/work and human factors that can affect knowledge sharing. To fill this gap in research, this study proposes a framework based on the non-technical aspects of Software Engineers to increase their knowledge sharing behavior. The components which are used to propose the framework include motivation, personality traits of software engineers, job characteristics and perception towards knowledge sharing technology. Based on an extensive literature review, the study suggests that motivation, personality and job characteristics have direct relationships with knowledge sharing behavior, whereas perception towards technology plays a moderating role. INTRODUCTION Software engineers work on a variety of projects. To make a project successful, knowledge should be shared among project members, because software development is a knowledge-intensive activity (Bjorson and Dingsoyr, 2008). If project members hoard their knowledge, then it becomes difficult to successfully complete the project on time. This delay in the timely completion of a project can increase the cost of the project and in some cases even result in the failure of the project. On-time delivery of a project is one of the criteria to measure a project's and an organization's success. Knowledge sharing (the transfer of knowledge from one person or group to another (El-Korany and El-Bahnasy, 2009)) is, therefore, vital for all software engineers in order to perform well in their jobs. Increasing knowledge sharing among software engineers is an uphill task, as most project leaders of software projects often do not have formal training or a qualification in the area of management. These professionals are normally promoted to higher posts based on their technical skills rather than management skills (Tanner, 2003; Rehman et al., 2011). This makes the process of influencing software engineers to share knowledge a rather complex task. As Software Engineering is a comparatively different profession from other professions, the methods and techniques used to increase knowledge sharing in other professions may not apply to Software Engineers. This means that to increase knowledge sharing among software engineers, a study focusing purely on software engineers needs to be conducted. In delivering a project successfully, technical and non-technical aspects (human, environmental, job, personality, etc.) are equally important. In fact, technical skills (or hard skills) should be blended or tailored with soft skills to make a project successful. This process of blending hard and soft skills is known as "tailoring" (Howard, 2001). In other words, hard and soft skills should complement each other (Capretz and Ahmed, 2010). As mentioned earlier, Bjorson and Dingsoyr (2008) note that previous studies have predominantly focused on the technical side of Software Engineering.
Recently, however, there has been a growing interest in the non-technical aspects (soft issues) of Software Engineering. As a human-related activity, knowledge sharing can, therefore, be categorized as a soft (non-technical) aspect of Software Engineering. This study enhances the work that has been done on the soft side (non-technical) of Software Engineering by proposing a framework that focuses on factors to increase the Knowledge Sharing Behavior (KSB) of software engineers through soft aspects. As individuals are the building blocks of teams and organizations, this study focuses on individuals' KSB. The components which are integrated in the proposed framework are motivation, personality traits, job characteristics and perception towards Knowledge Sharing Technology (KST) usage and ease of use. LITERATURE REVIEW Motivation and KSB in Software Engineering: Some organizational policies have to be pursued to increase knowledge sharing among individuals (Foss et al., 2009) because an individual's motivation (the strength and direction of behaviour (Shafizadeh, 2007)) to share knowledge cannot be taken for granted (Cabrera and Cabrera, 2002). As mentioned by Liu et al. (2010), knowledge sharing is not an outcome that will be achieved automatically. Instead, it is a capability that needs to be developed. Employees normally hesitate to share knowledge due to various reasons such as job insecurity, opportunistic behaviour of others, low reciprocity, etc. This means that in order to increase knowledge sharing, organizations may have to persuade employees to share knowledge. Due to this, knowledge sharing remains a difficult task (Lam et al., 2010). Organizations normally use different motivations in order to foster the KSB of employees. These motivations (motivators) can be extrinsic and intrinsic in nature (Galia, 2007). Extrinsic motivation is achieved through the benefits attached to the job, such as the relationship with supervisors and financial benefits. On the other hand, intrinsic motivation is achieved directly from the task itself. In other words, a job or task 'is valued for its own sake and appears to be self-sustaining' (Deci, 1976). Although a vast literature exists on knowledge sharing, the relationship between knowledge sharing and individual motivation is largely unexplored and misunderstood (Milne, 2007). Studies on individual motivation have largely concentrated on understanding factors that lead to motivation. Several studies have been conducted to understand the motivators and de-motivators for software engineers. One of the most recent and comprehensive studies is by Beecham et al. (2008). The findings revealed the following:
• Motivators and de-motivators were categorized into generic and specific.
• Generic motivators include "rewards and incentives, development needs addressed, variety of study, career path, empowerment/responsibility, good management, sense of belonging, study/life balance, studying in successful company, employee participation, feedback, recognition, equity, trust/respect, technically challenging study, job security, identify with the task, autonomy, appropriate studying conditions, making a contribution and sufficient resources" (Beecham et al., 2008).
• Generic de-motivators include "risk, stress, inequity, interesting study going to other parties, unfair reward system, lack of promotion opportunities, poor communication, uncompetitive pay, unrealistic goals, bad relationship with users and colleagues, poor studying environment, poor management, producing poor quality software, poor cultural fit and lack of influence" (Beecham et al., 2008).
• Some motivators and de-motivators which are specific to Software Engineering include "problem solving, team studying, change, challenge (Sanatnama and Brahimi, 2010), benefits, science, experiment, development practices and software process/life-cycle. Specific de-motivator was software process/life-cycle" (Beecham et al., 2008).
Based on the studies of Beecham et al. (2008) and Sharp et al. (2009), a study by Da Silva and Franca (2012) was conducted focusing on Brazilian software engineers. That study, however, focused on the motivators aspect only. In another study by Tanner (2003), motivators and de-motivators of engineers, including software engineers, were investigated. The findings revealed the following as motivators to the respondents: "Problem solving, creativity, job itself, sense of accomplishment, recognition from top technical management, opinions of employees are considered, team member with appropriate technical and personality skills, technical training and sincerity from management." De-motivators mentioned were "compensation, engineering managers and administration and overhead". The above studies, which were conducted in the field of Software Engineering, have identified motivators and de-motivators for software engineers. As these motivators or de-motivators are specific to software engineers, meaning they are directly related to the task itself, they can be categorised under the intrinsic form of motivation. These two kinds of motivators (intrinsic and extrinsic) play an important role in increasing or decreasing KSB. According to Tanner (2003), to have long-term effects, intrinsic motivation is more important than extrinsic motivation. This does not mean that extrinsic motivation is not important, but a blend of both extrinsic and intrinsic motivations will be more useful than pursuing only one kind of strategy for software engineers. Types of motivation also vary in terms of knowledge sent or received (Foss et al., 2009), in the sense that intrinsic motivation has a positive impact on knowledge sent whereas extrinsic motivation has a negative impact on the amount of knowledge sent (Foss et al., 2009). From the above review, we can infer that both extrinsic and intrinsic motivations have an impact on KSB. The only difference is that intrinsic motivation is more important than extrinsic motivation in making software engineers share their knowledge. Extrinsic motivation can increase KSB at a particular time, but when those extrinsic rewards are removed, software engineers will decrease their knowledge sharing or will probably stop sharing at all. Figure 1 shows the relationship between motivation and KSB. Personality traits and KSB in Software Engineering: Human factors have important roles in developing software (Wang, 2009). Thus, the success of a software project depends not only on the right people with technical skills but also on the right personalities (Sodiya et al., 2007). Understanding the personality of software developers is as important as knowing their qualifications, technical skills and experience (Howard, 2001).
Personality attributes affect online knowledge sharing (Hsieh and Kao, 2010) as well as offline KSB. This phenomenon of online knowledge sharing is commonly observable these days in the form of wikis and blogs. Various measures have been used to analyze an individual's personality, but the two most extensively used are the Big Five personality model (Goldberg, 1990) and the Myers-Briggs Type Indicator (MBTI) by Briggs and Myers (1987). Both have their own advantages and disadvantages, but the Big Five personality instrument has been used more widely to assess the personality of individuals (Sodiya et al., 2007). It covers most aspects of personality (Robbins, 2003) and its validity has been accepted by many scholars (Barrick and Mount, 1991; Barrick et al., 1998; John and Srivastava, 1999). The Big Five measures Openness, Agreeableness, Conscientiousness, Neuroticism and Extraversion as personality traits. Before the Big Five, such categorization of personality into various traits was not done, and this is the main benefit which the Big Five offers (Hsu et al., 2007). The Big Five is therefore used in this study for analyzing the relationship between personality and the KSB of software engineers. Figure 2 shows the relationship between the Big Five personality traits and KSB. Agreeableness refers to people who are sympathetic, good-natured and cooperative (McElroy et al., 2007). These people are altruistic as well (Goldberg, 1990). Other traits such as trust and a friendly nature are also important features of agreeable people (Martínez et al., 2010). Because of their sympathetic, altruistic, cooperative, trustworthy and friendly nature, they are willing to share their knowledge in order to make the project successful (Srinivasan, 2009). As found by Sodiya et al. (2007), all software engineers are high in agreeableness. Therefore, it can be assumed that software engineers are sympathetic, good-natured, cooperative and high in altruism, and thus they will have higher KSB. People high in Neuroticism have poor emotional stability and can easily surrender under anxiety, depression or insecurity (Martínez et al., 2010). Therefore, because of their insecure nature, they may not share their knowledge (Hsu et al., 2007). It is also reported that people with more stable personalities share more knowledge (Hsieh and Kao, 2010). Sodiya et al. (2007) concluded that software engineers are low in neuroticism. This means that software engineers are emotionally stable people and they do not feel the fear of insecurity. Hence, it can be concluded that since software engineers have a low neuroticism level, they will have higher KSB. Conscientious people are responsible, dependable, organized and goal oriented. Various studies such as Konovsky and Organ (1996) and Organ and Ryan (1995) have shown a strong relationship between conscientiousness and Organizational Citizenship Behaviour (OCB). OCB has a strong link with KSB (Lin, 2008), and as OCB is intrinsic in nature and leads to higher intrinsic motivation, it can be assumed that people who are high in conscientiousness will share more knowledge. As far as software engineers are concerned, software management engineers, testers and evaluators are moderate in this personality characteristic, whereas requirement engineers, designers and programmers are high in conscientiousness (Sodiya et al., 2007). Thus, it can be concluded that most software engineers are high in conscientiousness, which has a positive relationship with OCB, and OCB (an intrinsic motivator) has a positive impact on KSB.
Hence, people with the conscientiousness trait will have positive KSB because they will be intrinsically motivated to do so. People with 'Openness to experience' as their trait are more inclined towards new experiences in life (Barrick and Mount, 1991). They are also imaginative, curious and unconventional (Martínez et al., 2010). Due to this nature, they try to gain new knowledge and, based on social exchange theory, they might share their knowledge to gain new knowledge in return (Blau, 1964). According to the findings of Sodiya et al. (2007), most software engineers are low in openness to experience. This means that most software engineers are not willing to share or learn from their experiences. Thus, a lower openness to experience will cause a lower level of knowledge sharing among software engineers. Extraverts are comfortable while engaging in social and group activities. They are active, cheerful, confident, optimistic, outgoing and passionate. They are also competent conversationalists. Results by Sodiya et al. (2007) concluded that most of the software engineer categories (software management engineers, designers, programmers and evaluators) are low in extraversion, which makes them introverts. Introverts are less sociable and involved in fewer group activities; thus, introverted software engineers will share less knowledge. Job characteristics and KSB in Software Engineering: Very little work has been done on the relationship between job characteristics and KSB specifically for software engineers, although in other fields job characteristics, especially the Job Characteristics Model (JCM) (Hackman and Oldham, 1980), have been widely used. The problem with JCM is that it uses only five dimensions as job characteristics (Grant, 2007), whereas more dimensions of job characteristics need to be included in future studies (Foss et al., 2009). Besides JCM, Turner and Lawrence (1965) also produced some job characteristics which were later reviewed by Hackman and Lawler (1971) and set the path for the development of the Job Diagnostic Survey (JDS). This study will focus on the job characteristics mentioned in Turner and Lawrence (1965) and Hackman and Oldham (1980). According to JCM, skill variety, task identity and task significance have an impact on the meaningfulness of work, which results in high internal work motivation. As noted by Couger and Zawacki (1980), high internal work motivation leads to high quality work, high satisfaction with work and less absenteeism and turnover, which are all outcomes. From this it can be concluded that skill variety, task identity and task significance can lead to more positive outcomes. Since KSB is also a performance-related outcome (Rabbiosi et al., 2009), a positive relationship between skill variety, task identity, task significance and KSB can be predicted. Software engineers are high in skill variety due to continuously changing technology and platforms and more learning opportunities from a unique solution for every problem. Similarly, all software engineers, whether they are requirement engineers, design engineers, coders or testers, complete their parts of the job, so they have higher task identity. At the same time, software development itself is a significant job and has an impact on many people, thus making their tasks significant. Therefore, it can be concluded that software engineers who have more skill variety, task identity and task significance will have higher KSB.
Other job characteristics like autonomy and job feedback influence responsibility for outcomes and knowledge of results, respectively. Both variables, responsibility for outcomes and knowledge of results, impact the outcomes mentioned in JCM. Once again, autonomy and feedback influence the outcomes, so a positive relationship can be predicted between these two factors and KSB. Some other studies have also shown relationships between JCM components and KSB. Latham and Pinder (2005) found that higher autonomy can lead to more time for learning and development. In addition, autonomy increases the intrinsic motivation of an individual to share knowledge (Foss et al., 2009). Feedback has also been found to have a positive impact on external motivation to share knowledge (Foss et al., 2009). Apart from the dimensions mentioned by Hackman and Oldham (1980), Hackman and Lawler (1971) also mentioned two more job characteristics, which are dealing with others and friendship opportunities. Both are related to the personality traits of an individual. As we know from the discussion above, software engineers are high in introversion, which means they will be hesitant when dealing with others or when forming friendships. This hesitation will lead to lower KSB among software engineers. Figure 3 shows how job characteristics are related to KSB. Role of technology in influencing KSB among software engineers: Information can be transformed into knowledge with the help of Information Technology (IT) (Chen et al., 2009), thus making technology a critical success factor for the implementation of Knowledge Management (KM) (a group of clearly defined methods or procedures (Chen et al., 2008)) and a key enabler for knowledge sharing (Davenport, 1997). Organizations can benefit through sharing knowledge among their members. This knowledge can be shared by implementing KM, for which Information and Communication Technologies (ICTs) are used (Leung, 2010), thus emphasizing the role of technology for KSB. The role of technology for knowledge sharing depends on the acceptance of KST by organizational members. KST acceptance includes Perceived Usefulness (PU) and Perceived Ease of Use (PEOU). Perceived usefulness and perceived ease of use are different but connected to each other (Kim, 2008). Both these dimensions of KST moderate (as moderating variables) the relationship between motivation and KSB. People who are motivated and perceive that KST is easy to use, useful for them and their organization, and serves the purpose for which it is in place will use such KST more often, which will result in higher KSB. KST also plays a moderating role between personality traits and KSB. Those individuals who have sharing as a personal characteristic may perceive technology as a barrier. For example, if people are open to sharing their experience (which means they are sociable and have emotionally stable personalities), they will have a positive relationship with KSB. However, if KST is too complex, not easy to use, not mature enough to do the task(s) for which it was implemented, or not compatible with the daily job routine, and software engineers perceive it as not useful, then even though software engineers have knowledge-sharing personality attributes, these issues of technology will negatively impact their KSB. Thus, it can be said that technology plays a moderating role between personality traits and KSB. KST plays a moderating role between job characteristics and KSB as well.
Job characteristics include autonomy, feedback, job complexity, task significance, dealing with others, friendship opportunities and skill variety. KST has a moderating role to play between all job characteristics and KSB. For example, people with more autonomy have more time to learn and share their knowledge, but in the case of complex KST, or a perception of low usefulness by software engineers, KSB will decrease due to less usage of KST. Similarly, as was mentioned by Hurley and Green (2005), feedback, autonomy and job variety play important roles in creating a KM culture. Thus, the impact of these job characteristics on KSB cannot be ignored, as KSB is one of the outcomes of a KM culture. If technology does not play its due role, then it will be very difficult to create a KM culture. Therefore, technology plays a moderating role between job characteristics and KSB. DISCUSSION Knowledge sharing plays a key role in the success of any organization. The same goes for Software Engineering organizations. As Software Engineering is a distinct and knowledge-intensive profession, the importance of knowledge sharing can never be ignored for this profession. Increasing knowledge sharing is not a simple task. To increase knowledge sharing, certain strategies need to be followed. Those strategies include providing the right kind of motivation, hiring the right personalities, providing the right job characteristics and ensuring that KST is perceived positively. Both intrinsic and extrinsic motivation play a vital role in increasing KSB. However, intrinsic motivation has the upper hand over extrinsic motivation. Some studies even showed a negative relationship between knowledge sharing and extrinsic motivation. Bock and Kim (2001) showed that attitude towards knowledge sharing and extrinsic motivation have a negative relationship. Extrinsically motivated people may move away from KSB in the absence of extrinsic motivation, whereas intrinsically motivated people will continue their KSB even in the absence of extrinsic motivators. Despite its importance, intrinsic motivation can also affect an organization negatively, as intrinsically motivated people may follow their own goals and objectives to satisfy themselves rather than following the goals and objectives of the organization (Galia, 2007). Therefore, Software Engineering managers should exercise care while blending extrinsic and intrinsic motivations. Solely relying on extrinsic or intrinsic motivation will not work and can result in a decrease in the efficiency and effectiveness of software engineers. Personality plays an important role in predicting work-related outcomes. As knowledge sharing is also a performance-related outcome, which is part of work-related outcomes, different researchers have analyzed the relationship between KSB and personality traits. In this regard, the Big Five model of personality traits has been used many times due to its grasp of overall personality traits. This study also links the Big Five personality traits to knowledge sharing in the context of Software Engineering. Previous studies showed that personality traits do impact KSB. For example, Gupta (2008) found that agreeableness and conscientiousness have a positive relationship with KSB. It was also mentioned by Gupta (2008) that there is no significant relationship between KSB and openness to experience, neuroticism and extroversion. However, the authors of the current study do not agree with these findings.
Openness to experience, neuroticism and extroversion do indicate that a person who is sociable in nature and emotionally stable likes to share and learn more. Thus, the chances of a positive relationship between openness to experience, extroversion and KSB are higher than those of a negative relationship. Ford (2008) concluded that job-related factors also impact KSB, as was revealed by 19 out of a total of 28 respondents. These job-related factors include job characteristics. Job characteristics as defined by Turner and Lawrence (1965) and Hackman and Oldham (1980) consist of task significance, task identity, feedback, autonomy, friendship opportunities, dealing with others and skill variety. Most of these job characteristics have a positive relationship with KSB as far as software engineers are concerned, because these job characteristics increase the motivation to share knowledge. However, dealing with others and friendship opportunities have a negative relationship with KSB for software engineers. Technology is a critical success factor for KM implementation, and knowledge sharing is vital to make KM implementation successful. Proper KST, which is perceived to be easy to use and useful, helps software engineers share their knowledge if they are motivated and have the right personality and job characteristics. Based on the literature review and discussion, Fig. 4 shows the proposed framework. CONCLUSION This study proposed a framework to increase the KSB of software engineers. The Software Engineering industry is currently booming and is heavily dependent on knowledge. This knowledge needs to be shared among software engineers as it not only increases the performance of software engineers but also helps to complete projects on time. Knowledge sharing can be considered the "jugular vein" of the Software Engineering industry. That is why, recently, researchers have increased their focus on KSB in this profession. Future studies will validate this framework by adding some more job or work design characteristics. The empirical validation is already in process as part of a PhD study. It will be interesting to see how different Software Engineering categories behave against motivational factors and to look at what motivational factors are important for which category of software engineers. It will also be interesting to see how personality and work characteristics vary for different Software Engineering categories.
5,520.2
2014-01-27T00:00:00.000
[ "Computer Science" ]
Meshless Technique for the Solution of Time-Fractional Partial Differential Equations Having Real-World Applications Department of Basic Sciences, University of Engineering and Technology, Peshawar, Pakistan Department of Mathematics, Shaheed Benazir Bhutto Women University, Peshawar, Pakistan Section of Mathematics, International Telematic University Uninettuno, Corso Vittorio Emanuele II, 39, 00186 Roma, Italy Department of Mathematics, University of Swabi, Khyber Pakhtunkhwa, Pakistan Renewable Energy Research Centre, Department of Teacher Training in Electrical Engineering, Faculty of Technical Education, King Mongkut’s University of Technology North Bangkok, 1518 Pracharat 1 Road, Bangsue, Bangkok 10800, Thailand School of Mathematics and Information Science, Henan Polytechnic University, Jiaozuo 454000, China Introduction Fractional order calculus is a dynamic branch of calculus which is concerned with integration and differentiation of noninteger order. This branch of mathematics has attracted researchers in the last few decades [1][2][3][4][5][6][7]. Fractional partial differential equations (FPDEs) are commonly used to model problems in science, engineering, and many other fields including fluid mechanics, chemistry, viscoelasticity, finance, and physics. Some interesting applications of FPDEs can be found in [8][9][10][11]. Many researchers did considerable work to find the analytic solution of FPDEs [12][13][14], but it is difficult and sometimes impossible to find the analytic solution of most FPDEs. Therefore, many researchers turned to numerical techniques to find the solution of FPDEs [15][16][17][18][19][20]. There are two widely used definitions of fractional derivatives, namely, Caputo and Riemann-Liouville. The main difference between these two operators is the order of evaluation. Many authors analyzed time-fractional partial differential equations (PDEs), for example, Wyss [14], Agrawal [21], Liu et al. [22], Jiang and Ma [23], and Chang et al. [24]. The radial basis function (RBF) method has been used to find the solution of FPDEs. RBF collocation schemes are used to find the solution of PDEs, integral equations, integrodifferential equations, etc. The main idea behind the RBF method is to approximate space derivatives by RBFs, which converts the PDE to a system of linear equations. The solution of this system of linear equations leads to the solution of the governing equation. This method is gaining popularity due to its meshless nature and ease of use in high dimensions and complex geometries. To utilize this advantage of the RBF scheme, it is applied to time-fractional PDEs in higher dimensions and with different types of domains. In this paper, an implicit scheme (IS) and a Crank-Nicolson scheme (CNS) are coupled with RBFs. Many authors used the meshless RBF method to solve FPDEs [25][26][27][28][29][30][31][32][33][34][35]. In this paper, the multiquadric (MQ) RBF is used to approximate the solution. The MQ-RBF is defined by φ(r_ij) = √(r_ij² + c²), where r_ij = ‖z_i − z_j‖, i, j = 1, 2, ⋯, N, N is the number of collocation points, and c is the shape parameter. Furthermore, z = z_1 in the one-dimensional case and z = (z_1, z_2) in the two-dimensional case. Kansa [36] applied the multiquadric radial basis function (MQ-RBF) collocation method to solve PDEs. Since then, there have been a lot of applications and developments of the MQ-RBF as an efficient meshless method to solve engineering problems.
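As an illustration of the collocation idea described above, the minimal Python sketch below assembles the MQ collocation matrix for a set of 1D nodes, solves for the expansion coefficients λ, and evaluates the resulting interpolant. The node count, shape parameter, and test function are arbitrary choices for demonstration, not values taken from the paper.

```python
import numpy as np

def mq_matrix(z_eval, z_centers, c):
    # Multiquadric RBF phi(r) = sqrt(r^2 + c^2), evaluated pairwise.
    r = np.abs(z_eval[:, None] - z_centers[None, :])
    return np.sqrt(r**2 + c**2)

# 1D collocation nodes and sample data (hypothetical example)
N = 30
z = np.linspace(0.0, 1.0, N)
c = 0.5                        # shape parameter (problem dependent)
f = np.sin(2 * np.pi * z)      # data to interpolate

A = mq_matrix(z, z, c)         # collocation matrix, invertible for distinct nodes
lam = np.linalg.solve(A, f)    # expansion coefficients lambda

# evaluate the interpolant on a finer grid and check the error
z_fine = np.linspace(0.0, 1.0, 200)
f_approx = mq_matrix(z_fine, z, c) @ lam
print(np.max(np.abs(f_approx - np.sin(2 * np.pi * z_fine))))
```

In the full scheme, the same matrix structure is used to express space derivatives of v in terms of the coefficients λ, which is what turns the PDE into a linear system at each time level.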
However, ill-conditioned behaviour and sensitivity to the shape parameter are the main obstacles in Kansa's MQ-RBF method. Many researchers have discussed the optimal shape parameter used in the MQ-RBF [37][38][39]. Formulation of the method flows in the following major steps: (1) approximate the time-fractional derivative by using the Caputo definition; (2) approximate the space variable by RBFs; (3) substitute the values obtained from the previous two steps into the problem to get a system of linear equations. Definition 1. The Caputo derivative of noninteger order α (with m − 1 < α < m, m ∈ ℕ) of a function v(z, t) is defined by [9] ∂^α v(z, t)/∂t^α = (1/Γ(m − α)) ∫₀ᵗ (t − s)^(m−α−1) ∂^m v(z, s)/∂s^m ds. In this paper, we have tackled the following two cases of the Caputo derivative: 0 < α < 1 and 1 < α < 2. We emphasize the following time-fractional PDE: ∂^α v/∂t^α + L(v) = ψ(z, t), z ∈ Ω, t > 0. The boundary conditions (BCs) are prescribed on ∂Ω, where v(z, t) is the solution, ∂^α v/∂t^α is the Caputo fractional derivative of order α, ψ(z, t) is the source term, Ω is the bounded domain, and ∂Ω is the boundary. Equation (3) can be a fractional diffusion, fractional wave diffusion, or fractional anomalous diffusion equation depending on L(v). Here, L(v) is a linear operator of the form L(v) = aΔv + b∇v + cv, where a, b, and c are functions of z or constants and Δ and ∇ denote the Laplacian and gradient operators, respectively. Main Objective of the Paper. This paper is aimed at solving FPDEs by using the combination of the Caputo fractional derivative operator and RBFs. The Caputo fractional derivative operator is applied to approximate the time derivative, whereas RBFs are adopted to approximate the space derivatives. The organization of the rest of the paper is as follows: Section 2 is dedicated to constructing the meshless scheme for the first case, i.e., 0 < α < 1. In Section 3, we consider the second case, i.e., 1 < α < 2. In Section 4, the numerical method is applied to different problems and a comparison is made with some other methods. Section 5 gives the concluding note of this work. Formulation of the Method for Case I In this section, we take 0 < α < 1 and find ∂^α v/∂t^α by using the Caputo derivative. A finite difference scheme is applied to approximate the first-order time derivative appearing on the right-hand side of the Caputo derivative. Then, the θ-weighted scheme is applied to the governing equation, and the value of the time derivative is substituted. 2.1. Time-Fractional Derivative. The Caputo fractional derivative for α ∈ (0, 1) is defined by ∂^α v(z, t)/∂t^α = (1/Γ(1 − α)) ∫₀ᵗ (t − s)^(−α) ∂v(z, s)/∂s ds. Taking the derivative at t = t_{n+1}, splitting the integral over the subintervals, and using a finite difference scheme to approximate ∂v/∂t on each subinterval, where dt is the time step size, we obtain ∂^α v(z, t_{n+1})/∂t^α ≈ a_α Σ_{k=0}^{n} b_k (v^{n+1−k} − v^{n−k}), where a_α = dt^(−α)/Γ(2 − α) and b_k = (k + 1)^(1−α) − k^(1−α), k = 0, 1, ⋯, n. Finally, we write the approximate solution in the RBF expansion form v^{n+1}(z_i) = Σ_{j=1}^{N} λ_j^{n+1} φ(r_ij), where φ(r_ij) are the RBFs, ‖·‖ is the Euclidean norm, and the λ_i's are the unknown constants. We can write Eq. (14) in matrix form as v^{n+1} = Aλ^{n+1} (15). The collocation matrix A must be nonsingular to be invertible; this depends on the choice of RBF and the location of the mesh points. Matrix A is invertible for distinct mesh points. The shape parameter has an important effect on the condition number [40]. Once we find the constants λ_i^{n+1}, we can find the solution v from Eq. (14). Putting the value from Eq. (15) into Eq. (13), we obtain a linear system in which g_1^{n+1} and g_2^{n+1} are known functions given in the BCs. Finally, from Eq. (18) and Eq. (16), we can find the solution at any time level t_n.
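To make the time discretization concrete, the following sketch evaluates the L1-type Caputo approximation with the weights a_α = dt^(−α)/Γ(2 − α) and b_k = (k + 1)^(1−α) − k^(1−α) stated above, and checks it against the known Caputo derivative of v(t) = t², which is 2 t^(2−α)/Γ(3 − α) for 0 < α < 1. The test function, order, and step size are illustrative assumptions.

```python
import numpy as np
from math import gamma

alpha = 0.5
dt = 1e-3
t = np.arange(0.0, 1.0 + dt, dt)
v = t**2                                    # test function v(t) = t^2

a_alpha = dt**(-alpha) / gamma(2 - alpha)
n = len(t) - 1
k = np.arange(n)
b = (k + 1)**(1 - alpha) - k**(1 - alpha)   # b_k weights of the L1 formula

# L1 approximation of the Caputo derivative at the final time t_n = 1:
# sum_k b_k * (v^{n-k} - v^{n-k-1}); b_0 multiplies the most recent increment.
dv = np.diff(v)
caputo_L1 = a_alpha * np.sum(b * dv[::-1])

exact = 2 * t[-1]**(2 - alpha) / gamma(3 - alpha)
print(caputo_L1, exact)                     # the two values should nearly agree
```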
This matrix depends on the constant κ = dt/h^ς, where dt is the time step, h is the distance between any two successive nodes, and ς is the order of the spatial differential operator. Let us denote the exact solution of Eq. (3) by v^n at time t_n. We state a well-known theorem of Fasshauer [41] (see [42] for proof). Theorem 2. [41] Suppose Ω ⊆ ℝ^s is open and bounded and satisfies an interior cone condition. Suppose Φ ∈ C^{2k}(Ω × Ω) is symmetric and strictly conditionally positive definite of order m on ℝ^s. Denote the interpolant to f ∈ N_Φ(Ω) on the (m − 1)-unisolvent set χ by P_f. Fix α ∈ ℕ_0^s with |α| ≤ k. Then, there exist positive constants h_0 and C (independent of z, f, and Φ) such that the stated interpolation error bound holds, provided that h_{χ,Ω} ≤ h_0. Application of Theorem 2 to infinitely smooth functions such as Gaussians or generalized (inverse) multiquadrics immediately yields arbitrarily high algebraic convergence rates, i.e., for every k ∈ ℕ and |α| ≤ k, the corresponding bound holds whenever f ∈ N_Φ(Ω), where N_Φ(Ω) represents the native space of the RBFs. A considerable amount of work has gone into investigating the dependence of the constant C_k on k [43]. In this work, the MQ-RBF is used, so the corresponding bound applies, where v̂ and v are the exact and approximate solutions, respectively. Now let us assume that the scheme (21) is p-th order accurate in space. Let us define the residual by ε^n = v̂^n − v^n. By the Lax-Richtmyer definition of stability, the scheme (21) is stable if the powers of the amplification matrix E remain uniformly bounded; when the matrix E is normal, ‖E‖ = ρ(E); otherwise, the inequality ρ(E) ≤ ‖E‖ is always true. It is assumed that the step size h is small enough and that the solution and IC of the given problem are sufficiently smooth. We must have dt → 0 to keep κ = dt/h^p constant. Therefore, there exists some constant C bounding the residual. Since the residual ε^n obeys zero IC and BCs, ε^0 = 0. So, by mathematical induction, the scheme is convergent. Formulation of the Method for Case II In this section, we take 1 < α < 2 and find ∂^α v/∂t^α by using the Caputo derivative. We approximate the second-order time derivative (appearing in the Caputo derivative) by the central difference scheme. We then apply the θ-weighted scheme to the governing equation. 3.1. Time-Fractional Derivative. The Caputo fractional derivative for α ∈ (1, 2) is defined by ∂^α v(z, t)/∂t^α = (1/Γ(2 − α)) ∫₀ᵗ (t − s)^(1−α) ∂²v(z, s)/∂s² ds. Taking the derivative at t = t_{n+1}, approximating ∂²v/∂t² by the central finite difference scheme, and repeating the same process as in Section 2.1, we obtain the discrete form of ∂^α v/∂t^α. Now, v^{−1} appears for n = 0 and k = n, for which we use the second IC. Hence, we get the value of ∂^α v/∂t^α. Proceeding in the same way as in Section 2.2, we get v^1 and v^{n+1}, respectively. We can use Eq. (41) and Eq. (43) to find the solution at any time level t_n for n ≥ 1. Numerical Results This section is devoted to the numerical implementation of the schemes constructed in Section 2 and Section 3. We have applied the schemes to six problems, including one-dimensional and two-dimensional time-fractional PDEs. Problems with different types of domains and geometries are also included. We assess the accuracy of the method by taking different values of t and α. We have utilized error norms such as E_∞, and computational orders in time and space are calculated, respectively, from the errors obtained at successive time and space step sizes. Figures are incorporated to show the performance of the method. We have applied IS and CNS and have compared the results.
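Since the displayed error-norm and order formulas did not survive extraction, the sketch below uses the standard choices: the maximum and root-mean-square norms, and the observed order log(E_coarse/E_fine)/log(h_coarse/h_fine) computed from two runs at successive step sizes. The sample error values are purely hypothetical.

```python
import numpy as np

def error_norms(v_exact, v_num):
    # Maximum (E_inf) and root-mean-square error norms.
    e = np.abs(v_exact - v_num)
    return np.max(e), np.sqrt(np.mean(e**2))

def convergence_order(e_coarse, e_fine, h_coarse, h_fine):
    # Observed order: log of the error ratio over log of the step-size ratio.
    return np.log(e_coarse / e_fine) / np.log(h_coarse / h_fine)

# hypothetical errors from two runs with the time step halved
print(convergence_order(4.0e-3, 1.1e-3, 0.02, 0.01))   # about 1.86
```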
An attempt is made to apply the schemes for some nonuniform nodes, including Chebyshev, random, Halton, and scattered data nodes. Also, numerical simulation is performed for some irregular domains. Moreover, we have compared our results with the results reported in [28,44]. Convergence order is calculated in all the problems, and there is uniform convergence in all the problems. The exact solution is v(z_1, t) = t² sin(2πz_1). (49) Problem 3. In the first problem, we take the following time-fractional PDE [23]. The IC is given as v(z_1, 0) = 0 with homogeneous BCs. In Figure 1, E_∞ is plotted for different values of α. It can be observed that an increasing value of α causes less accurate results. Also, it is clear that IS gives more accurate results than CNS. Figure 2 displays the exact/approximate solution and absolute error at t = 1, c = 5.1, and dt = 0.001. In Figure 3, numerical solutions at different time levels are plotted. Table 1 is concerned with the convergence order, and it shows that the scheme is convergent, as proved theoretically. In Table 2, we have computed the numerical results utilizing the IS and the CNS for various values of α. It is observed from the table that the IS produced better results as compared to the CNS. The IC is considered as v(z_1, 0) = 0, while the BCs are v(0, t) = t^(1+α), v(1, t) = e t^(1+α). In Table 3, IS and CNS are compared with the collocation finite element scheme (CFES) [44], which indicates the admirable performance of both schemes. Figure 4 shows the behaviour of the IS scheme by plotting the absolute error and the approximate/exact solution. One can examine that the error decays with increasing x. In Figure 5, we have plotted numerical results for different values of t. Convergence order is calculated in Table 4, and uniform convergence is obtained. The exact solution is v(z_1, t) = t² sin(z_1). (55) Problem 4. We consider the following problem [44]. Problem 5. In this problem, we take the following time-fractional PDE [44] with IC v(z_1, 0) = 0 and BCs v(0, t) = 0, v(π, t) = 0. Table 5 shows the performance of IS, CNS, and CFES. One can see that IS and CNS give less error than CFES. Figure 6 is aimed at showing the absolute error and the exact and approximate solution for α = 0.20. We can inspect that the results obtained from the IS scheme agree with the exact solution. In Table 6, the order of convergence is calculated, which shows the convergence of the scheme. Problem 6. In this problem, we take L(v) = −∂²v/∂z_1² [28] and consider ICs given by v(z_1, 0) = 0, v_t(z_1, 0) = 0 with BCs v(0, t) = 0, v(1, t) = 0. In Table 7, the E_∞ error norm is given for IS, CNS, and the meshless Galerkin method (MGM) [28]. We can see that IS produces more accurate results than both CNS and MGM. Figure 8 displays E_abs and the relationship between the exact and numerical solution. Figure 9 is devoted to plotting the solution at different time levels. In Tables 8 and 9, convergence orders are calculated in time and space, respectively. It can be seen that the method is uniformly convergent in time as well as in space. For the two-dimensional problem, the exact solution is v(z_1, z_2, t) = E_α(−(1/2)π²t^α) cos(πz_1/2) cos(πz_2/2), where the one-parameter Mittag-Leffler function is defined by E_α(w) = Σ_{k=0}^{∞} w^k/Γ(αk + 1). We consider the IC v(z_1, z_2, 0) = cos(πz_1/2) cos(πz_2/2), and the BCs are drawn from the exact solution.
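The one-parameter Mittag-Leffler function that enters the exact solution of the two-dimensional problem above can be evaluated, for moderate arguments, by truncating its defining series E_α(w) = Σ_{k≥0} w^k/Γ(αk + 1). The sketch below is a simple illustration rather than a robust algorithm for large |w|.

```python
import numpy as np
from math import gamma

def mittag_leffler(alpha, x, terms=100):
    # One-parameter Mittag-Leffler function E_alpha(x) via a truncated series.
    # Adequate for moderate |x|; not intended for large arguments.
    return sum(x**k / gamma(alpha * k + 1) for k in range(terms))

# sanity check: E_1(x) reduces to exp(x)
print(mittag_leffler(1.0, -0.5), np.exp(-0.5))

# value entering the exact solution at t = 1 for alpha = 0.5 (illustrative)
print(mittag_leffler(0.5, -0.5 * np.pi**2))
```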
We have performed computations for c = 13, t = 1, dt = 0.01, and N = 30. In Figure 15, we have given the approximate solution for different values of α for uniform nodes, while in Figure 16, we have plotted the E_∞ error norm for different types of nonuniform nodes for N = 100. We obtained reasonable accuracy for these cases as well. Problem 8. In this problem, we take the following time-fractional PDE [45]. Conclusion In this work, an attempt is made to propose IS and CNS schemes for the solution of time-fractional PDEs. The time derivative is defined and simplified in the Caputo sense, and then its value is substituted into the governing equation along with the replacement of the space derivatives by RBFs. This paper has an edge over existing papers in the sense that the scheme is constructed for both 0 < α < 1 and 1 < α < 2 and for both IS and CNS. Problems are given to show the behaviour of the method. Numerical results for different values of α are demonstrated to examine the effect of α on the solution. Results produced by IS are compared with those produced by CNS. Results are also compared with some other methods in the literature. This comparison clearly indicates the impressive performance of our schemes. In order to utilize the advantage of the RBF collocation method for nonuniform nodes and irregular domains, numerical simulation is performed, and remarkable results are achieved for nonuniform nodes and an irregular domain. Data Availability Data will be available on request. Conflicts of Interest The authors declare that there are no conflicts of interest associated with this publication.
3,866.8
2020-10-24T00:00:00.000
[ "Engineering", "Mathematics", "Physics" ]
A Review of Spatter in Laser Powder Bed Fusion Additive Manufacturing: In Situ Detection, Generation, Effects, and Countermeasures Spatter is an inherent, unpreventable, and undesired phenomenon in laser powder bed fusion (L-PBF) additive manufacturing. Spatter behavior has an intrinsic correlation with the forming quality in L-PBF because it leads to metallurgical defects and the degradation of mechanical properties. This impact becomes more severe in the fabrication of large-sized parts during the multi-laser L-PBF process. Therefore, investigations of spatter generation and countermeasures have become more urgent. Although much research has provided insights into the melt pool, microstructure, and mechanical property, reviews of spatter in L-PBF are still limited. This work reviews the literature on the in situ detection, generation, effects, and countermeasures of spatter in L-PBF. It is expected to pave the way towards a novel generation of highly efficient and intelligent L-PBF systems. Introduction Additive manufacturing (AM) is widely used in aerospace, medicine, jewelry, and other industries because of its rapid fabrication [1,2], low cost, and the ability to print parts with complex geometries [3,4]. Today, many developed and developing countries regard AM technology as a fifth industrial revolution and make many efforts in the development of AM. The United States Department of Defense (DoD) released the Department of Defense Additive Manufacturing Strategy [5] to stimulate the development of AM applications in national defense. Meanwhile, the Office of the Under Secretary of Defense released the first policy paper, DoD 5000.93 Directive Use of Additive Manufacturing in the Department of Defense [6], which promoted the implementation of the AM strategy. The Ministry of Science and Technology of the People's Republic of China released the 2022 annual project application guide for the key projects of additive manufacturing and laser manufacturing under the 14th Five-Year National Key R&D Program [7] to establish a new standard system for AM that is consistent with international standards. Additionally, AM and laser manufacturing are two of the important tasks of the National Program for Medium-to-Long-Term Scientific and Technological Development and Made in China 2025. The EU began funding projects on AM technology as early as the first Framework Program for Research and Technological Development. Under these conditions, AM technology has advanced significantly and rapidly in developing standard systems, key technologies, and multi-industry applications. Metal AM is one of the most difficult and cutting-edge AM technologies. As shown in Figure 1, metal AM technologies can be divided into two categories, direct energy deposition (DED) and powder bed fusion (PBF) [14,15]. PBF is one of the AM technologies used to fabricate metal objects from powder feedstocks with two kinds of input energy: laser and electron [16][17][18].
In the printing process, the metal powder bed is melted by the high energy source with a designed pattern using a layer-by-layer printing strategy [19][20][21]. Figure 1. Classification of metal manufacturing processes: equal-material manufacturing, additive manufacturing [22], subtractive manufacturing. Figure 1 also illustrates the forming principle of laser powder bed fusion (L-PBF), which is widely used today to rapidly manufacture parts with complicated shapes, a fine grain size, high densities, and superior mechanical properties [23,24]. Although it can currently fabricate complicated metal parts [25,26], the reliability and stability of the printing process remain inadequate [27]. There are still defects in L-PBF processing that decrease the density and affect the mechanical characteristics of the part or even result in fabrication failure. The many unresolved problems with L-PBF become a barrier to the expansion of L-PBF applications. Spatter is generated in conventional laser welding and cutting, DED, and L-PBF. Spatters are the particles ejected from a melt pool during the laser-metal interaction [28]. In conventional laser welding and cutting, the laser scanning path is relatively simple, with few overlap regions between the scanning paths, and DED has a lower scanning velocity and a larger spot than L-PBF. However, L-PBF is a powder-bed-based technology, and the printing process is more complicated than that of the three technologies mentioned above, which results in a more complex spatter behavior. Furthermore, during multi-laser L-PBF, the thermal and stress cycling, melt pool characteristics, spatter behavior, and metal vapor evolution will definitely differ from those of single-laser PBF. The detection of spatter under multi-laser L-PBF is more difficult. For this reason, studies on L-PBF spatter are becoming very urgent. Spatter as a byproduct of L-PBF is unpreventable [29,30]. It is a detriment to the forming process and the part, and the redeposited spatters can destroy the original well-built powder layer, resulting in non-fusion defects [31,32]. Due to the uniqueness of L-PBF, the undesired effects of spatter are amplified during the layer-by-layer process. Spatter affects the subsequent re-coating and melting of the powder, resulting in internal defects in the produced part or the part failing to form. As spatter has a significant effect on L-PBF, it can be used to represent the L-PBF machining state. Spatter contains a plethora of information and can be used in various ways to analyze the manufacturing processing of L-PBF. By observing and quantifying the spatter, it is possible to establish an intrinsic correlation between spatter and the part quality, enabling a more comprehensive understanding of the L-PBF process to solve the problems of insufficient stability and reliability, allowing this technology to be popularized and applied more widely. Recently, the research concerning spatter during L-PBF has received more and more extensive attention. In this work, we review academic publications concerning L-PBF spatter in the Web of Science database from 2015 to date (Topic: ["laser-powder bed fusion" and "spatter"] or ["selective laser melting" and "spatter"]). Figure 2 shows the trend in the number of articles on this topic over the last several years.
Laser Powder Bed Fusion Spatter In Situ Detection Device The L-PBF detection system can be categorized as: static detection (imaging of spreading powder and deformation) and dynamic detection (characterization of the melt pool, spatter, and vapor plume). The spatter generated by conventional laser welding, cutting, and DED is similar to that produced by L-PBF and is caused by the interaction between the laser and the metal material. However, L-PBF has a smaller spot (~10^1 to 10^2 µm), a smaller melt pool (up to 100 µm), a shorter lifetime (~10 ms), and a higher scanning velocity (~10^2 to 10^3 mm/s) compared to laser welding, cutting, and DED [33]. Furthermore, in L-PBF, the laser interacts with the powder bed and the metal part more than once, resulting in a greater number and variety of spatters and complicating in situ spatter detection.
The laser-powder bed interaction produces the melt pool, spatter, and vapor plume (even plasma). The trajectory of the melt pool is in the plane of the laser path and can be predicted according to the strategy path, whereas the motion of the spatter is in a 3D space, and its trajectory is complex and difficult to predict. So, the detection of spatter is more difficult. Spatter can be divided into hot droplet spatter (mainly from the instability of the melt pool) and cold powder spatter (mainly driven by the vapor-induced entrainment of the protective gas). Both of them can be detected with a visible-light camera equipped with an illumination source, and the relevant collected information can be used to analyze them. According to various studies, the following methods are currently available for L-PBF spatter detection: (1) a visible-light high-speed camera, (2) X-ray video imaging, (3) infrared video imaging, and (4) schlieren video imaging. These detection techniques can detect different characteristics, as shown in Figure 3 and Table 1. Figure 3. Characteristics obtained from different in situ detection techniques: (a1-a3) time series snapshots taken by a visible-light high-speed camera (Reprinted with permission from Ref. [34]. Copyright 2019 Elsevier B.V.); (b1-b3) high-speed schlieren images during single track scans (Reprinted with permission from Ref. [35]. Copyright 2018 Springer Nature.); (c1-c3) dynamic X-ray images showing powder motion, A is the ejected powder (Reprinted with permission from Ref. [36]. Copyright 2018 Elsevier B.V.); (d1-d3) three consecutive frames of an infrared video acquired during L-PBF (Reprinted with permission from Ref. [37]. Copyright 2018 Elsevier B.V.).
Table 1. Characteristics detected by the different in situ detection techniques:
Visible-light high-speed camera: surface characteristics.
X-ray video imaging: internal structure; flow behavior of the melt inside the melt pool.
Infrared video imaging: temperature distribution; flow behavior of gas.
Schlieren video imaging: gas flow propagation and distribution.
Visible-Light High-Speed Detector There are two main methods for observing L-PBF with a high-speed visible-light camera: coaxial and off-axis. In Figure 4a, the camera shares the same optical path with the laser, a coaxial solution; in Figure 4b, the camera is placed at an angle to the optical path of the laser, an off-axis solution.
Coaxial in situ detection of a commercial L-PBF machine requires extensive modification of the machine, and it is still difficult to obtain clear images because of the distance between the optical path system and the powder bed in the L-PBF machine. Another hindrance is the small optical aperture of the scanner and F-theta lens, which results in low magnification. In addition, the low reflectivity of the scanner and the low transmittance of the F-theta lens also reduce the temporal and spatial resolution of imaging, and these two characteristics are vital for analyzing the trajectory and behavior of spatters. To overcome the disadvantages of coaxial in situ detection, Zhang et al. [38] improved the optical path, built segmentation algorithms, and demonstrated the algorithms' efficiency in dealing with defocused and distorted spatter images. Unlike the coaxial solution, the off-axis solution, which places the detection device at an angle to the powder bed, enables spatter detection without altering the existing L-PBF equipment, as shown in Figure 5. The system is more adaptable and simpler to alter, and because the detection system does not share the optical path of the laser, it is not constrained by the laser's original optical path and can be used to detect spatter at higher magnification and frame rates than those of the coaxial system. Due to these factors, the off-axis in situ detection system is becoming increasingly popular. In the case of coaxial detection, the detection equipment and external light source affect the final detection findings. Yang et al. installed a high-speed camera (pco. Dimax HS4, 3000 fps) outside the L-PBF machine at a 65° angle to the working platform to detect the spatter. Due to the little difference in brightness between the powder spatter and powder bed within the view field of the high-speed camera, only the droplet spatters were detected, but not the nonmolten powder particles. Tan et al. [40] used a computational technique to analyze the obtained images, segmenting each block to extract the spatter. In the same year, Yin et al.
[41] introduced an external light source (a CAVILUX® pulsed high-power diode laser light source) and a high-speed camera (Phantom V2012) to detect the spatter and obtain clearer images. After that, this in situ detection system was used to investigate the correlation between ex situ melt track properties and in situ high-speed, high-resolution characterizations [34]. The above studies were based on the monocular camera, and the picture information collected was in a 2D space. By combining multiple cameras and using image processing arithmetic, 3D information on spatter and its mobility can be gathered. Based on the use of monocular sensors, Luo et al. [42] innovatively proposed the use of acoustic signals combined with deep learning for spatter detection, demonstrating the feasibility of the acoustic signal detection of spatter behavior. Due to the dimensional limitation of the 2D image (acquired by the monocular sensor), it is difficult to accurately calculate the behavioral information of the spatter and obtain accurate spatter trajectory, velocity, and other information. A binocular stereo detector can obtain the spatter information from two viewing angles. By using the multi-directional information, its algorithm can present the 3D trajectory and velocity of a single spatter, and the obtained information is more accurate than that of a monocular sensor. Barrett et al. [43] established a stereo vision spatter detection system for spatter tracking analysis at a cost of less than USD 1000 using two slow-motion cameras (FPS1000 by The Slow Motion Camera Company), as illustrated in Figure 6. Later, Eschner et al. [44] combined two ultra-high-speed cameras with algorithms to create a 3D tracking system for measuring spatter in L-PBF. Visible in situ detection systems for L-PBF in recent years are summarized in Table 2. * Keyhole: also known as the depression zone, is wrapped by the gas-liquid interface and penetrates through the melt pool.
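As a rough illustration of how a binocular system recovers 3D spatter positions, the following Python sketch performs standard linear (DLT) triangulation from two calibrated views; the projection matrices and particle coordinates are hypothetical, and real systems such as that of Eschner et al. [44] add particle matching, ghost-particle rejection, and frame-to-frame tracking on top of this step.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one spatter particle.

    P1, P2 : 3x4 projection matrices of the two calibrated cameras.
    x1, x2 : (u, v) image coordinates of the same particle in each view.
    Returns the 3D point in the common world frame.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# hypothetical calibration: two cameras viewing the build plate
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])                   # reference camera
P2 = np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])   # second camera, shifted

X_true = np.array([0.02, 0.01, 0.5])                            # a spatter particle (m)
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]
print(triangulate(P1, P2, x1, x2))                              # ~ [0.02, 0.01, 0.5]
```

Differencing the triangulated positions of a matched particle over consecutive frames, multiplied by the frame rate, gives an estimate of its 3D velocity.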
Invisible-Light In Situ Detection

For the invisible-light in situ detection of L-PBF, the imaging technologies mainly include X-ray imaging, schlieren video imaging, infrared imaging, and thermal imaging. X-rays have a short wavelength, high energy, and high penetration ability. High-intensity X-rays can penetrate a certain thickness of metal with high temporal and spatial resolution, which makes them the preferred method for many L-PBF spatter studies [35]. A schematic diagram of an X-ray system is shown in Figure 7 (reprinted with permission from Ref. [28]; Copyright 2017 Springer Nature). As one of the most productive X-ray sources globally, the Advanced Photon Source (APS) at Argonne National Laboratory provides experimental conditions for many researchers: more than 5500 researchers per year use X-rays produced by the APS, and many of them use those X-rays to detect L-PBF spatter. For example, Zhao et al. [28] pioneered the use of high-speed X-rays (harmonic energy 24.4 keV) for in situ characterization of the L-PBF process. Guo et al. [36] found transient spatter dynamics in L-PBF using a high-speed, high-resolution, and high-energy X-ray imaging technique. Cunningham et al. quantified the keyhole in Ti-6Al-4V powder during laser melting based on X-ray image information [55]. Leung et al. raised the X-ray energy (monochromatic X-ray energy: 55 keV) and studied stainless steel (316L) and 13-93 bioactive glass; they found that melt pool wetting and vapor-driven powder entrainment are key track growth mechanisms in L-PBF [57]. A summary of X-ray in situ detection is given in Table 2.

Because schlieren video imaging and infrared imaging can picture otherwise invisible light or materials, these two technologies are also used for the in situ detection of spatter. Schlieren video imaging, used to detect the plume in L-PBF, can visualize invisible substances by measuring their refractive index. Bidare et al. [58] used a combination of a high-speed camera and schlieren video imaging to capture images of the denuded region, laser plume, and argon atmosphere, and explained the relation between the powder-bed denuded region and spatter. An infrared camera collects the light emitted by an infrared source. Ye et al. [53] used infrared cameras to detect the properties of the original plume and spatter. Grasso et al. used the plume as the information source and examined it with an infrared camera to rapidly discover processing defects and unstable states [59,60].
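The cited plume-monitoring studies do not reduce to a single published algorithm, but a minimal frame-differencing sketch conveys the idea of converting an infrared image stream into a per-frame plume-activity score; the thresholds below are assumptions for illustration only.

```python
import numpy as np

def plume_activity(frames: np.ndarray, diff_thresh: float = 25.0) -> np.ndarray:
    """frames: (T, H, W) grayscale infrared stack.
    Returns a per-frame activity score: the fraction of pixels whose
    intensity changed by more than diff_thresh between consecutive frames."""
    diffs = np.abs(np.diff(frames.astype(np.float32), axis=0))
    return (diffs > diff_thresh).mean(axis=(1, 2))

# Frames whose score leaves a baseline band could then be flagged as
# unstable states in the spirit of Grasso et al. [59,60], e.g.:
# unstable = plume_activity(ir_stack) > 0.05   # 0.05 is an assumed cutoff
```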
Data Processing during Spatter Detection

Spatter images obtained from in situ detection require post-processing to enable the extraction and analysis of spatter behaviors.

Spatter 2D Image Processing Algorithm

Algorithms for 2D image processing are less complex than those for 3D image processing. Tan et al. [40] captured spatter images using Kalman filter tracking, segmented the images with grayscale and edge information, and obtained spatter information using fully convolutional networks and Mask R-CNN. Yin et al. [61] projected the 3D spatter trajectory onto a 2D plane with image processing, used a filtering technique to improve the sharpness of the spatter image, and tracked the spatter motion frame by frame using ImageJ (a minimal segmentation sketch is given below, after the 3D discussion).

Spatter 3D Image Processing Algorithm

Barrett et al. used a low-cost binocular sensor for spatter detection, laying a foundation for future analysis of the data [43]. Eschner et al. [44] used algorithms in a binocular sensor system to carry out many processing steps on the images, including (1) identifying particle positions and calibrating the camera system, (2) matching particles between multi-camera images, (3) determining the 3D coordinates, (4) using a priori knowledge of processes and particles to distinguish ghost particles from real particles, (5) tracking particles, and (6) processing the 3D data. These steps require more complex algorithms. The same group has since constructed a quadruple-eye sensor system, which uses an additional camera to reject ghost particles; however, relative to the binocular detection system, the quadruple-eye system must process a larger and more demanding amount of information [45].
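As a concrete illustration of the simplest 2D processing stage, the sketch below segments bright spatter blobs by global thresholding and connected-component labeling. It is not the pipeline of Refs. [40,61], whose methods (Kalman tracking, Mask R-CNN, ImageJ tracking) are considerably more robust; the threshold and minimum-area values are assumptions.

```python
import cv2
import numpy as np

def spatter_centroids(frame: np.ndarray, thresh: int = 200,
                      min_area: int = 3) -> list[tuple[float, float]]:
    """frame: 8-bit grayscale image. Bright spatter is segmented by a global
    threshold, and the centroid of each blob with at least min_area pixels
    is returned. Both parameter defaults are illustrative assumptions."""
    _, binary = cv2.threshold(frame, thresh, 255, cv2.THRESH_BINARY)
    n, _, stats, centroids = cv2.connectedComponentsWithStats(binary)
    return [tuple(centroids[i])                       # label 0 is the background
            for i in range(1, n)
            if stats[i, cv2.CC_STAT_AREA] >= min_area]
```

Linking such centroids across frames (nearest-neighbor association or a Kalman filter) then yields the 2D trajectories that the cited studies analyze.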
Full-Cycle Detection of Spatter in L-PBF

During the L-PBF process, the full cycle of the spatter can be divided into three stages: the initial stage (generation), the flight stage (ejection), and the fall-back stage (re-deposition). Detecting the spatter in these three stages is conducive to a deep understanding of the origin of the spatter, the correlation between spatter and defects, and the influence of the spatter on the part.

• Initial stage (generation, adjacent to the melt pool): Both cold and hot spatters are generated adjacent to the melt pool. Ultra-high-frame-rate in situ detection using a high-temporal-spatial-resolution off-axis camera combined with an illumination light source can capture a clear morphology of the spatters, which helps to reveal the mechanism of spatter generation.

• Flight stage (ejection, away from the powder bed): The amount of spatter and the ejection angle significantly affect the internal defects of the part. The spatter trajectory, ejection velocity, ejection angle, and spatter size should be obtained to investigate the intrinsic correlation between the spatter and the defects. A detection system with a long monitoring time and a high frame rate, following the laser path and using multiple sensors, is applied to capture the spatter flight (even with 3D information). The high-throughput data acquired during the L-PBF process can be used for statistical analysis of spatter characteristics. In general, only hot spatters are detected in this stage, to reduce the processing load on the monitoring system.

• Fall-back stage (re-deposition, close to the powder bed): The spatter eventually redeposits on the powder bed and parts, which affects re-coating and part quality. Layer-by-layer in situ detection with a wide-field-of-view, high-spatial-resolution camera can obtain high-quality images of the powder and parts. Algorithms applied to the image data extract and confirm the size and location of the redeposited spatter, which helps in predicting the forming quality of the parts and the location of defects.

Differences in In Situ Detection between Spatter and Melt Pool

Due to the complexity of spatters, the algorithmic requirements are higher than for melt pool detection. Generally speaking, the melt pool travels with the laser spot, so its movement follows a 2D trajectory, whereas the spatter moves along a 3D trajectory; the detection of the spatter must therefore be extended to 3D, which requires more in situ sensors and more information to be processed. (1) Compared with the melt pool, the spatter, with its micro size and extensive range of motion in 3D space, is much more difficult to detect, requiring multiple sensors (up to four) with micron spatial resolution. (2) The melt pool is generated by the action of the laser on the metal powder bed, and its trajectory can be predicted from the pre-defined laser path. In contrast, the trajectory of spatter is hard to predict because of its high-speed random motion in 3D space, which requires sensors with a temporal resolution down to microseconds to capture the whole trajectory, including its deflections. (3) The spatter data collected by sensors with high spatial and temporal resolution are several orders of magnitude larger than melt pool detection data, so the data processing for spatter detection is more complex and places higher demands on the algorithms (a back-of-envelope illustration follows below). As a result, observing the spatter and processing its data are much more challenging than detecting the melt pool, and the complexity of the spatter detection algorithms is further increased by 3D detection systems with multiple sensors.
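A back-of-envelope comparison makes the data-volume gap concrete; all camera parameters below are assumed, illustrative values rather than specifications of any cited system.

```python
# Rough raw data-rate comparison between melt pool monitoring and
# two-camera 3D spatter tracking. All parameters are assumed values.
def data_rate_gb_per_s(fps: int, width: int, height: int,
                       bit_depth: int, n_cameras: int = 1) -> float:
    return fps * width * height * bit_depth * n_cameras / 8 / 1e9

melt_pool = data_rate_gb_per_s(fps=1_000, width=256, height=256, bit_depth=8)
spatter_3d = data_rate_gb_per_s(fps=100_000, width=1024, height=1024,
                                bit_depth=12, n_cameras=2)
print(f"melt pool:  {melt_pool:.3f} GB/s")   # about 0.066 GB/s
print(f"3D spatter: {spatter_3d:.0f} GB/s")  # about 315 GB/s, roughly
                                             # three to four orders larger
```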
Mechanism of Spatter Generation

Under the interaction with a high-energy laser in L-PBF, metal powders melt to form a melt pool when the temperature reaches the melting point, and then vaporize to form metal vapor or even a plasma plume when the surface temperature of the melt pool surpasses the boiling point. The different phases (solid, liquid, and vapor) interact strongly with each other during the L-PBF process, and among these interactions the vapor-solid and vapor-liquid interactions are the main mechanisms of spatter generation. Therefore, it is necessary to investigate the mechanism of spatter generation.

Spatter Classification

The spatters generated in L-PBF take on different morphologies, and a variety of parameters affect spatter generation. Until now, there has been no common definition of spatter categorization. Liu et al. [62] performed L-PBF single-pass scanning experiments with 316L stainless steel powder, capturing the dynamic behavior of spatter perpendicular to the single-track scanning direction by high-speed imaging. They divided the spatter into two categories: droplet spatter and powder spatter. The spatter formation mechanism can accordingly be described as hot spatter ejection, mainly driven by the instability of the melt pool due to the vapor-induced recoil pressure, and cold spatter ejection, mainly driven by the vapor-induced entrainment of the protective gas. Wang et al. [63] used a high-speed camera to record the dynamic spattering process of Co-Cr alloys during L-PBF manufacturing and investigated the spatter generation mechanism in further detail. As shown in Figure 8, they recognized three major sources of spattering: recoil pressure, the Marangoni effect, and the heat effect in the melt pool. These three sources lead to three types of spattering morphologies.

According to Ref. [63], there are three types of spatters: (i) Type I spatters are associated with the extreme expansion of the gas phase; under the Marangoni effect, spontaneous liquid metal flow occurs from the high-temperature bottom of the excavation to the low-temperature sidewall and rear edge. (ii) In this process, the recoil pressure can induce a jet of low-viscosity liquid metal, and the jetted liquid divides into small drops in flight to minimize the surface tension, forming Type II spatter. (iii) During printing, some liquid metal accumulates near the laser spot; it can easily be squeezed by the blast wave and then disturb the non-melted particles in the front-end area, so Type III spatter occurs at the front of the melt pool.

Ly et al. [64] used a high-speed camera to explore the influence of gas flow entrainment on spatter during L-PBF. They described the entrainment phenomena of 316L stainless steel and Ti-6Al-4V powder layers and divided spatter into three categories.
As shown in Figure 9, 60% of the ejections were hot entrainment ejections at velocities ranging from 6 m/s to 20 m/s; 25% were cold entrainment ejections, which occurred at velocities of 2 m/s to 4 m/s; and 15% were droplet breakup ejections from the melt pool driven by the recoil pressure, at velocities of 3 m/s to 8 m/s. Raza et al. [65] likewise found that spatter originating from the melt pool was less abundant than that due to vapor-induced entrainment.

(Figure 9 caption, per Ref. [64]: particles with higher vertical momentum originating more than about two melt-pool widths from the beam are swept into the trailing portion of the vapor jet and ejected as cold particles, while particles with similar momentum originating within about two melt-pool widths of the point of laser irradiation are swept into or near the laser beam, heat rapidly, and are ejected as incandescent, hot particles. Reprinted with permission from Ref. [64]. Copyright 2017 Springer Nature.)
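Taken at face value, the velocity bands reported by Ly et al. [64] could seed a toy rule-based classifier such as the sketch below. Because the bands overlap (for example, 3 m/s to 4 m/s falls in two bands), any real classifier must also use brightness or temperature cues, so this is purely illustrative.

```python
# Toy classifier built only on the velocity bands reported by Ly et al. [64].
# The bands overlap, so the if-ordering below is an arbitrary priority;
# real systems combine velocity with brightness/temperature information.
def classify_by_velocity(v_m_per_s: float) -> str:
    if 6.0 <= v_m_per_s <= 20.0:
        return "hot entrainment ejection (~60% of events)"
    if 2.0 <= v_m_per_s < 4.0:
        return "cold entrainment ejection (~25% of events)"
    if 3.0 <= v_m_per_s <= 8.0:
        return "droplet breakup from the melt pool (~15% of events)"
    return "outside the reported bands"
```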
Young et al. [56] revealed the characteristics and generation mechanisms of five distinct types of spatter during L-PBF by in situ high-speed, high-energy X-ray video imaging: solid spatter, metallic ejected spatter, agglomeration spatter, entrainment melting spatter, and defect-induced spatter. They quantified the speed, size, and direction of the metallic ejected spatter, powder agglomeration spatter, and entrainment melting spatter. The results showed that the metallic ejected spatter was the fastest and the powder agglomeration spatter was the largest. The spatter direction was highly dependent on the characteristics of the depression zone, which is shaped directly by the metal vapor recoil pressure.

Whereas the above studies classified spatter using in-process analysis, the following study classified spatter using post-mortem analysis. Gasper et al. [66] divided the spatter into seven categories according to size, morphology, and other descriptors such as oxides and agglomeration derived from SEM analysis, namely: (1) particles similar to virgin gas-atomized particles, (2) particles morphologically different from gas-atomized ones, (3) larger singular particles with different morphologies, (4) particles with oxide spots, (5) particles covered with oxide, (6) small particles, and (7) agglomerates. Yang et al. [67] studied the influence of the L-PBF parameters on the pore characteristics and mechanical properties of Al-Si10-Mg parts. Three distinct types of solidified droplets were detected: hollow droplets, semi-hollow droplets, and solid droplets. Hollow and semi-hollow droplets were a major source of pores inside the samples. Table 3 summarizes current studies on the categorization of spatter generated during L-PBF.

Study of Droplet Spatter Ejected from "Liquid Base" of Melt Pool

The melt pool is a critical feature of L-PBF. Numerous studies on spattering from the melt pool have been carried out using numerical simulation, which avoids the high cost and inefficiency of repeated experiments. Khairallah et al. [68] studied the mechanism of spatter generation at the powder scale using a 3D high-precision model. The metal vapor exerts pressure on the melt pool during L-PBF, causing the emission of liquid metal. As the liquid metal is stretched, the column grows thinner and decomposes into tiny droplets because the surface tension tends to minimize the surface energy. Additionally, it was discovered that large back-ejected spatters are rather easily generated at the start of scanning [69].
They assumed that the laser scanning velocity cannot be kept constant at the beginning and end of the trajectory because of inertia, resulting in the deposition of a nonuniform energy density and causing such spatters, and they proposed a stability criterion to eliminate back-ejected spatter effectively. Altmeppen et al. [70] proposed a method to simulate time-dependent particle and heat ejection from the moving melt pool. This model can predict the direction and velocity of spatter emission and determine the size and temperature of a single particle by evaluating the direction and velocity of the local laser scan.

To verify the intrinsic mechanism of spatter generation, experiments were conducted to detect the spatter using X-ray imaging and high-speed imaging. The explosion caused by the instability of the front wall of the keyhole, which results from the vaporization of volatile elements during L-PBF, induces much of the droplet spatter. Zhao et al. used X-ray imaging to study the spatter behavior of Ti-6Al-4V powder during L-PBF. As illustrated in Figure 10, they demonstrated how the bulk explosion induced by the instability of the front wall of the keyhole in the melt pool resulted in a considerable amount of droplet spatter [71]. Using in situ high-speed, high-resolution imaging and thermodynamic analysis, Yin et al. investigated the vaporization and explosion behavior of alloy components in a Cu-10Zn alloy during L-PBF [72]. They found that the explosion caused by the violent vaporization of the low-boiling-point element also induced much droplet spatter and many defects in the melt track.

Using high-speed and high-resolution imaging technologies, Yin et al. [41] investigated the spatter behavior of Inconel 718 powder during L-PBF. A subthreshold ejection phenomenon was detected, in which droplets emitted from the droplet column fell back to the melt pool.
Later, the same authors studied the correlation between the ex situ melt track characteristics and the in situ high-speed, high-resolution characterization. They showed that the protrusion at the head of the melt track was caused by the combined action of the backward-flowing melt and the droplet ejection behavior in the melt pool [34]. Moreover, as illustrated in Figure 11, the melt pool first forms a depression under the recoil pressure of the vapor; the high-energy laser beam impinges on the front wall of the depression, causing its surface to vaporize quickly and generate a metal vapor perpendicular to that surface; the metal vapor expands and impacts the rear wall of the depression; finally, the spatter is formed and ejected backwards. The vertical metal vapor plume was identified as the principal cause of melt pool spattering. Through in situ measurements of a typical forward spatter ejection angle, the vapor recoil pressure (approximately 0.46 atm) was quantified.

The development of various advanced in situ characterization methods provides new directions for spatter research. Wang et al. [48] used a high-speed camera to investigate the characteristics of the droplet spatter of 316L stainless steel powder during the L-PBF process. Gould et al. [73] reported an in situ method to analyze the L-PBF process of Ti-6Al-4V and W powders using high-speed X-ray and high-speed infrared imaging simultaneously. By combining the two imaging modalities, various phenomena can be identified, including the 3D dynamics of melt pools, vapor plume dynamics, and spatter generation.

Surface tension and evaporation both have a noticeable effect on the melt pool.
Dai et al. [74] studied the influence of process parameters on the thermal behavior, fluid dynamics, and surface morphology of the melt pool using a mesoscopic simulation model. The results indicated that the evolution of the melt pool is highly sensitive to the melt viscosity, surface tension, and recoil pressure during L-PBF. Bärtl et al. [75] investigated the ability of the aluminum alloy powders Al-Cr-Zr-Mn, Al-Cr-Sc-Zr, and Al-Mg-Sc-Mn-Zr to produce lightweight, high-performance structures by L-PBF. They regarded both the surface tension and evaporation as potentially crucial factors dominating the melt dynamics, with materials of lower surface tension and less evaporation exhibiting the most unstable melt dynamics. Table 4 summarizes the research on droplet spatter ejected from the "liquid base" of the melt pool.

Study of Powder Spatter Ejected from "Solid Base" of Substrate

Due to the entrainment effect of the gas flow, powder particles close to the laser interaction zone are ejected as spatter. Ly et al. [64] performed an experimental comparison of the melt pool hydrodynamics of laser welding and L-PBF. In contrast to laser welding, the primary cause of spatter in L-PBF was not the laser-induced recoil pressure but the entrainment of microparticles by the ambient gas flow driven by the metal vapor. The high-speed X-ray video imaging of defects and the melt pool performed by Leung et al. [76] supported the Ly et al. hypothesis about the generation of cold and hot entrainment spatter during L-PBF. Chen et al. [77] built a multi-phase flow model to investigate spatter generation during L-PBF; the spatter phenomena were shown to result from metal vapor- and ambient gas-induced entrainment, which supports the findings of Ly et al. [64]. Gunenthiram et al. [78] used high-speed camera techniques to investigate the dynamic behavior of 316L stainless steel powder and 4047 aluminum-silicon alloy powder during spatter generation in L-PBF. As shown in Figure 12 [61], owing to heat transfer from the surrounding powder bed, powder particles in close contact with the front and sides of the melt pool tend to agglomerate into larger droplets; some of these agglomerates are subject to the entraining gas flow and are in turn ejected as spatter. To establish the correlation between scanning velocity and spatter generation, Zheng et al. [51] used a high-speed camera to investigate the effect of scanning velocity on the generation and evolution of metal vapor plumes during L-PBF of 304 stainless steel powder. The results indicated that powder spatter generation is more closely related to the stability and evolution of the vapor plume and the resulting melt track than to changes in the volumetric energy density (VED); the trend of an increasing number of spatters with increasing VED had been reported by Gunenthiram et al. [78]. The droplet spatter generated at the commencement of the scan trajectory was found to be the consequence of coupling between the melt pool and the inclined metal vapor plume. Table 5 summarizes the studies of spatter ejected from the solid substrate.
Study of Spatter Generation Mechanism in Multi-Laser-PBF Fabrication Process

Recently, multi-laser-beam L-PBF has been applied to meet the growing demand for large-sized part manufacturing in the aerospace and energy fields. Andani et al. [79] investigated the spatter behavior of Al-Si10-Mg powder during dual-beam L-PBF using a high-speed camera technique. They showed that the number of operating laser beams significantly influences the spatter creation mechanisms during the SLM process: a higher number of working laser beams induces a greater recoil pressure above the melt pools and ejects a larger amount of metallic material from them. However, they did not describe the interaction between the dual-beam laser and the material in the overlap region.

The mechanism by which a dual-beam laser generates spatter is distinct from that of a single-beam laser. Yin et al. [80] investigated the interaction between dual-beam lasers and the material in the overlap region during dual-beam L-PBF of Inconel 718 alloy powder using a high-speed, high-resolution video imaging system. They proposed the spatter growth rate (r_s) to quantitatively characterize the spatter behavior in multi-laser powder bed fusion (ML-PBF). Based on their experimental observations, Yin et al. [80] concluded that most of the spatter in multi-laser L-PBF is due to metal vapor-induced entrainment (ejected from the "solid base" of the substrate) rather than to the metal vapor recoil pressure (ejected from the "liquid base" of the melt pool). In fact, r_s in the vapor-entrainment-dominated stages is one order of magnitude higher than in the unstable-melt-pool-dominated stage disturbed by the recoil pressure and the collision of the two melt pools. This shows that the entrainment effect dominates the generation of multi-laser-PBF spatter, as shown in Figure 13. A summary of the studies on the mechanism of spatter generation during the ML-PBF process is given in Table 6.
Disadvantage of Spatter

Spatter is an unavoidable by-product of the complex heat transfer process between the laser and the metal powder in L-PBF [20,30,54]. Spatter negatively influences process stability and energy efficiency, reduces the quality of the manufactured object, and can potentially damage the machine [68]. According to current research, the disadvantages posed by spatter in L-PBF can be classified into three categories: (1) The effect of spatter on printing processing: spatter can affect the powder re-coating of the next layer and reduce the energy input efficiency of the laser, the operational stability of the powder re-coating device [63,81], and the optical lens. (2) The effect of spatter on structure and performance: spatter is not conducive to controlling the structure (e.g., voids, roughness) and performance (e.g., tensile properties, oxygen content) of printed parts. (3) The effect of spatter on powder recycling: recycled powder can entrain spatter particles, resulting in a significant deterioration of powder quality, and using recycled powder to form parts can reduce part performance.

Effect of Spatter on Printing Processing

From the generation mechanism of spatter, it follows that spatter negatively influences powder re-coating and energy absorption during L-PBF processing.

Effect of Spatter on Powder Re-Coating

Spatter particles that redeposit onto the powder bed hinder the powder re-coating, and voids between the spatter particles and the powder can induce part defects.
Figure 14 shows how spatter generated during L-PBF introduces voids and internal defects in the printed part. Wang et al. [63] discovered that the re-coating powders were influenced by spatter particles, because a small amount of spatter attached to the surface of the printed parts during stacking, and the spatter particles caused deformation of the scraper (Figure 14a). When the redeposited spatter particles are smaller than the layer thickness, they melt completely after laser scanning and bond metallurgically to the powder and the underlying part. If the size of the redeposited spatter particles exceeds the layer thickness, they do not melt completely, which induces voids between the powder and the spatter particles, as illustrated in Figure 14b. The voids remain after the scanning of the next layer, creating metallurgical defects, as illustrated in Figure 14c. Schwerz et al. [82] found spatter particles of approximately 136 µm in the cross-section of a part, illustrating how particles significantly larger than the nominal layer thickness are incorporated into the material despite recoating; in the process, large bumps of spatter particles can damage the scraper, as shown in Figure 14d.

To detect the distribution of redeposited spatter on the build area, long-exposure near-infrared in situ monitoring combined with image analysis was employed to determine the exact locations, using the EOS EOSTATE Exposure OT system [82]. This system consists of a 5-megapixel sCMOS (scientific complementary metal-oxide-semiconductor) camera positioned on top of the build chamber, with the entire build platform area in its field of view. A bandpass filter of 900 nm ± 12.5 nm is placed on the camera to reject the reflected laser light and avoid detecting environmental noise. A sample image representative of a single layer can be observed in Figure 15a; samples near the gas inlet (Figure 15b) and gas outlet (Figure 15c) are shown separately. The long-exposure images revealed deviations in the form of high-intensity spots preferentially distributed towards the gas outlet, as in Figure 15c, and the redeposited spatter can be extracted by algorithms (Figure 15d). The spatter deposited near the gas outlet has been identified as one of the factors responsible for the rise of internal defects, which will be discussed in Section 4.2.
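The OT system's own processing is not spelled out here, but a minimal sketch of the extraction idea, thresholding a long-exposure layer image at a high percentile and locating the bright blobs, illustrates how re-deposition maps like Figure 15d can be derived. The percentile value and the assumed gas-outlet orientation are illustrative assumptions, not the EOS algorithm.

```python
import numpy as np
import cv2

def redeposition_spots(layer_img: np.ndarray, pct: float = 99.5):
    """layer_img: long-exposure grayscale image of one layer.
    Returns (x, y) centroids of blobs brighter than the pct percentile.
    The percentile is an assumed, illustrative threshold."""
    thresh = np.percentile(layer_img, pct)
    binary = (layer_img > thresh).astype(np.uint8) * 255
    n, _, _, cents = cv2.connectedComponentsWithStats(binary)
    return [tuple(cents[i]) for i in range(1, n)]  # label 0 is background

def outlet_bias(spots, img_width: int) -> float:
    """Fraction of spots in the half of the build plate nearer the gas
    outlet (assumed here to be the right half; the orientation is
    machine-specific)."""
    if not spots:
        return 0.0
    return sum(x > img_width / 2 for x, _ in spots) / len(spots)
```

A per-layer bias well above 0.5 would reproduce, in miniature, the observation that high-intensity spots concentrate towards the gas outlet.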
Effect of Spatter on Energy Absorption

If spatter occurs in the laser path, it can result in inefficient use of the laser energy. Several studies have examined the influence of spatter on the energy required to melt the powder. Ferrar et al. [83] first reported on the influence of gas flow on L-PBF in 2012. They demonstrated that processing by-products in the laser path can absorb and scatter the laser beam, attenuating it and generating lack of fusion. Anwar et al. [84] came to a similar conclusion in the selective laser melting of Al-Si10-Mg, implying that laser energy can be squandered on spatter, as shown in Figure 16. The laser beam irradiates spatter particles that enter the beam path, and these consume a significant amount of energy, inducing incomplete melting of the powder and defects [85]. The spatter accumulated in the powder bed inevitably consumes energy that is needed to melt the fresh powder [86].
Effect of Spatter on Structure and Performance

Spatter causes a loss of laser energy; moreover, spatter re-deposition and oxidation also affect the quality and structure of parts. A coating of oxide is generated on the spatter surface during L-PBF, and the oxide layer greatly reduces the wettability of the liquid metal, which induces spheroidization [88,89]. Particles with an oxidized surface require more energy for melting and incorporation into the melt pool and the bulk material, resulting in lack of fusion [82]. Seriously oxidized spatter particles that redeposit into the high-temperature melt pool reverse the direction of the Marangoni convection flow [90,91]. Additionally, oxidized spatter particles in the melt pool induce holes and defects [88,92]. The oxide composition of Inconel 718 spatter particles was evaluated by SEM-EDS by Gasper et al., as shown in Figure 17. To determine the extent of oxidation of the spatter particles, a particle with oxide spots and fully oxidized particles were also analyzed by SEM-EDS with an in situ focused ion beam (FIB), as shown in Figure 18.

Schwerz et al. [82] investigated the effect of spatter on parts using destructive (metallographic analysis) and non-destructive (ultrasonic inspection) methods. They discovered that the spatter re-deposition zone contained numerous internal defects. Based on the maps of redeposited spatter (Figure 19a,c), cross-section metallography of samples with high and low rates of spatter re-deposition was analyzed. No obvious internal defects were found in the area with a low spatter re-deposition rate, as shown in Figure 19b, whereas numerous internal defects were found in the area with a high spatter re-deposition rate, as shown in Figure 19d.
These internal defects occur in conjunction with round particles with a dendritic structure, indicated by white arrows in Figure 19e,f, located at inter-melt-pool boundaries, i.e., lack-of-fusion defects. Multiple internal defects larger than 500 µm were verified by ultrasonic inspection as the layer thickness increased.

Spatter can also reduce the tensile properties of the parts. Liu et al. [62] conducted tensile testing of specimens built from fresh and from contaminated 316L stainless steel powder, and the results showed that the mechanical properties of the specimens manufactured with contaminated powder are far inferior to those manufactured with fresh powder, as shown in Figure 20. Specimens built from contaminated powder show considerably more voids in the fracture surface than specimens built from fresh powder. These voids nucleate cracks and accelerate crack propagation during tensile testing, resulting in a dramatic reduction of the specimens' mechanical properties.
Effect of Spatter on Powder Recycling

Only 2 wt.% to 3 wt.% of the powder is laser-melted into metal parts during L-PBF; powder recycling is therefore an efficient way to extend powder use [93]. However, recycled powder contains L-PBF by-products, which complicates recycling. Spatter particles come in various sizes: a sieving mesh easily removes most of them, but a small percentage of spatters smaller than the original powder particles remains. Powder recycling affects the L-PBF process differently for powders of different compositions. (1) The 316L stainless steel powder is unique in having an inherent SiO2 oxide layer on its surface that prevents the variable valence of metallic elements. It can be reused up to 15 times in L-PBF without much effect on the mechanical properties of the parts, but the oxygen content of the print increases with the number of recycles, and the part density decreases after 5 to 6 recycles [94]. (2) Ti-6Al-4V also carries a surface oxide layer; the elemental content of the powder remains nearly the same after 31 recycles, and the tensile strength, yield strength, and elongation are also almost unchanged [95]. (3) The recyclability of Al-Si10-Mg is poor: its oxygen content doubles after 6 recycles [96].
(4) The steel alloy 17-4 PH showed a narrowing of the particle size distribution and a loss of tensile strength after 5 recycles [97]. (5) Hastelloy X is easily oxidized because it contains oxygenophilic elements such as Si, Cr, and Ni, and owing to the wettability of Hastelloy X powder it produces more spatter, which affects the recycling of the powder. He et al. [98] found that after 6 recycles of Hastelloy X, the average particle size increased by 22%, the oxygen content increased by 48%, and the part porosity increased, resulting in reduced part quality. Table 7 summarizes the number of recycle times available for different powders.

According to a study by Simonelli et al. [103], when powders are used for an extended period without sieving, numerous impurities mix with the powder and eventually become embedded in the surface of the manufactured part. Most of these impurities are spatter particles with the same composition as the slag produced in conventional steel manufacturing; the impurity consists primarily of SiO2 and other oxides, which can contaminate the composition of the powder. Even after sieving, some spatter particles remain, and printing with powders containing spatter particles easily results in defects inside the part. Wang et al. [104] discovered that during the L-PBF forming of a porous structure, the spatter particles in the recycled powder became inclusions in the part, degrading part quality. Santecchia et al. [105] found that the environmental conditions in the build chamber can lead to rapid condensation of vaporized material, and large amounts of condensate and spatter deposited together on the powder bed can affect the reuse of the powder. High concentrations of condensate, including condensate on spatter particles, were observed by Sutton et al. [90] by SEM imaging, as shown in Figure 21.

The spatter has a negative effect on the whole L-PBF process chain, including the equipment (e.g., laser beam, scraper), the current build (e.g., structure and mechanical properties), and subsequent builds (e.g., powder recycling).
The spatter has a negative effect on the whole process of L-PBF, including the equipment (e.g., laser beam, scraper), the current L-PBF build (e.g., structure and mechanical properties), and subsequent L-PBF builds (e.g., powder recycling). The generated spatter prevents the laser from directly irradiating the powder bed, resulting in a loss of laser energy. Redeposited spatters damage the scraper and become inclusions in the parts, degrading the structure and mechanical properties of the parts. Furthermore, spattering influences the whole life cycle of the powder. In the current build, the spatters redeposit onto the powder bed, and irregularly shaped spatter particles become inclusions in the powder, increasing the powder's oxygen concentration. These powders can produce inferior-quality parts in subsequent builds, reducing the number of times the powder can be recycled. Metal powders are more expensive than ingot metal; therefore, increasing the number of powder recycles is critical to using it efficiently. Spatter reduces powder quality and recycle times, and its removal can effectively improve powder usage efficiency, so it is essential to research spatter countermeasures. The disadvantages of spatter are summarized in Table 8.

Spatter Countermeasures

The disadvantages of spatter involve the equipment, components, and powders. Effective spatter countermeasures would extend equipment life, improve part quality, and enhance powder use. The full cycle of the spatter can be divided into three stages: generation, ejection, and redeposition. In the generation stage, spatter can be suppressed by optimizing the laser volumetric energy density (VED), the laser beam mode, and the pressure of the build chamber. During the ejection and redeposition stages, a protective gas flow is applied to remove the spatters in motion above the powder bed.

Process Parameters

In practice, regulating process parameters has emerged as a critical topic of study in reducing spatter effects during L-PBF. Process parameters such as the VED, scanning strategy, and build chamber pressure can affect the generation of spatter as follows: (1) Adopting a large spot combined with a low volumetric energy density can increase the depth of the melt pool and effectively suppress spatter. (2) A Bessel beam can be employed to stabilize the melt pool and reduce the generation of spatter. (3) A pre-sintering and re-coating printing strategy can reduce spatter generation. (4) Adding helium to the protective gas, reducing its oxygen content, and increasing the build chamber pressure can reduce spatter generation. Table 9 summarizes studies on the control of L-PBF process parameters to reduce spatter generation during processing.

Laser VED

The laser VED affects the number and volume of spatters, and is calculated as E_V = P / (V · d_l · h_p), where P is the laser power, V is the scanning velocity, d_l is the laser diameter, and h_p is the layer thickness of the powder [127].
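For concreteness, a minimal sketch of this VED calculation (the process parameters below are placeholders for illustration, not settings from any of the cited studies):

```python
def laser_ved(power_w: float, velocity_mm_s: float,
              laser_diameter_mm: float, layer_thickness_mm: float) -> float:
    """Volumetric energy density E_V = P / (V * d_l * h_p), in J/mm^3,
    following the definition quoted from [127]."""
    return power_w / (velocity_mm_s * laser_diameter_mm * layer_thickness_mm)

# Placeholder parameters for illustration only:
# 200 W laser, 800 mm/s scan speed, 80 um spot, 30 um layer.
ev = laser_ved(power_w=200.0, velocity_mm_s=800.0,
               laser_diameter_mm=0.08, layer_thickness_mm=0.03)
print(f"E_V = {ev:.1f} J/mm^3")  # ~104 J/mm^3
```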
Gunenthiram et al. [78] demonstrated that the volume of spatter increased with increasing VED, as seen in Figure 22. Mumtaz et al. [128] used pulse shaping techniques to precisely regulate the energy of the laser-material interaction zone, minimizing the spatter generated during L-PBF, which improved the top surface roughness of the parts and minimized the melt pool width. Shi et al. [115] demonstrated that spatter defects can be successfully reduced by adjusting the energy density during single-layer formation; the sample with the smoothest surface was produced when the linear energy density and the surface energy density were set to 0.4 J/mm to 0.6 J/mm and 4 J/mm2 to 6 J/mm2, respectively.

• Laser power: The applied laser power affects the number and volume of spatters; in most situations, studies have shown that the higher the laser power input, the more severe the spatter behavior. Andani et al. [52] concluded that decreasing the laser power would reduce spatter in L-PBF, and that the laser power dominates the effect on spatter generation. Chen et al. [46] demonstrated that adjusting the power intensity and distribution of the laser beam to maintain the melt pool temperature between the melting and boiling points can significantly reduce spatter generation.

• Scanning velocity: The laser scanning velocity affects the generation of spatter. Andani et al. [52] considered that increasing the laser scanning velocity would reduce spatter in L-PBF. Gunenthiram et al. [78] studied the number of spatters at different scanning velocities (V = 0.33~0.75 m/s) and found that the higher the scanning velocity, the lower the number of hot spatters, as shown in Figure 22. However, a high scanning velocity leads to a longer scanning path, which increases the cold spatter caused by entrainment.

• Laser diameter: The laser spot size during L-PBF can significantly affect the melt dynamics and droplet spatter generation [117]. The spot size can vary in two ways: passively and actively. In passive variation, the lens can deform due to thermal expansion and contraction induced by the incident high-energy laser, so that the spot size varies during laser conduction. Active variation means adjusting the laser spot size deliberately. Gunenthiram et al. [78] demonstrated a possible way to entirely suppress the spatter by using a large spot when the melt pool is sufficiently deep. Sow et al. [116] investigated the influence of a large laser spot on L-PBF and concluded that combining a large spot with a low VED significantly improved the L-PBF process in terms of process stability, spatter reduction, and component density.

• Layer thickness: The number of redeposited spatters was found to increase with the layer thickness [82]. As the layer thickness rises, the heat of the melt pool cannot be conducted away quickly by the surrounding powder, which destabilizes the melt pool, and the number of spatters increases accordingly. However, due to the limited area of laser irradiation, the increase in spatter slows down once the layer thickness reaches a certain value. Zhang et al. [38] found that spatter generation slows down when the layer thickness exceeds twice the size of the powder particles.
Laser Mode

The generation of spatter is influenced by the mode of the laser beam used in L-PBF. The main laser modes currently used in L-PBF are the Gaussian beam, inverse Gaussian (annular) beam, flat-top beam, and Bessel beam. Several studies have shown that Bessel beams perform significantly better than Gaussian beams in L-PBF.

• Gaussian beam: Compared with L-PBF equipment that uses Bessel beams, the Gaussian beam produces more spatter, and the spatter is ejected at a higher velocity; this is due to the higher recoil forces generated by the Gaussian-like thermal distribution of the laser beam on the melt pool [129] (see the short check after this list).

• Inverse Gaussian (annular) beam: Compared to the Gaussian beam, the inverse Gaussian (annular) beam can reduce the creation of spatter and increase the geometric tolerance of the 3D parts [119].

• Flat-top beam: L-PBF with a flat-top beam generates less and slower spatter than Gaussian and inverse Gaussian (annular) beams, as stated by Okunkova et al. [119].

• Bessel beam: The Bessel beam helps stabilize the melt pool and thereby reduce spatter. Nguyen et al. [118] investigated the possibility of using Bessel beams for ultrafast laser processing in AM, indicating that Bessel beams might alleviate the negative impacts of spatter in L-PBF. Tumkur et al. [129], using high-speed imaging to observe melt pool dynamics, found that Bessel beams stabilize the melt pool's turbulence, increase its solidification time, and reduce spatter generation (Figure 23).
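The recoil-pressure argument above ultimately comes down to peak irradiance: for equal delivered power, a Gaussian profile concentrates roughly twice the intensity at beam center compared with a flat-top of the same radius. A short check of this textbook relation (the beam parameters are illustrative only, not taken from the cited works):

```python
import math

def gaussian_peak_intensity(power_w: float, w0_m: float) -> float:
    """Peak intensity of a TEM00 Gaussian beam: I0 = 2P / (pi * w0^2)."""
    return 2.0 * power_w / (math.pi * w0_m**2)

def flattop_peak_intensity(power_w: float, radius_m: float) -> float:
    """Uniform (flat-top) beam: I = P / (pi * r^2) everywhere in the spot."""
    return power_w / (math.pi * radius_m**2)

P, r = 300.0, 40e-6  # 300 W in a 40 um radius spot (illustrative)
print(f"Gaussian peak: {gaussian_peak_intensity(P, r):.3e} W/m^2")
print(f"Flat-top:      {flattop_peak_intensity(P, r):.3e} W/m^2")
# The factor-of-two higher Gaussian peak drives stronger local vaporization
# and recoil, consistent with the greater spatter reported for Gaussian beams.
```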
Printing Strategy

The scanning strategy can be divided into two categories: the scanning path and the pre-sintering method. A checkerboard scanning path can reduce the generation of spatter, and when the scanning direction is aligned with the gas flow direction, the spatter can be effectively removed. Pre-sintering with a low energy density can also effectively suppress the generation of spatter.

• Generation of spatter: Rivalta et al. [130] found that the hexagonal (outside-in verse) scanning strategy produces more spatter. It is speculated that when hexagonal patterns are used for component manufacturing, the time between adjacent scan tracks rises and the temperature range becomes too wide, so more energy is required to heat the surrounding environment, resulting in increased spatter. A checkerboard scan approach can help to reduce the generation of spatter.

• Removal of spatter: The trajectory of the spatter depends on the direction of the laser scan; most spatters move opposite to the scanning direction. The spatter can be effectively removed if the direction of spatter movement is consistent with the protective gas flow. The gas flow direction is fixed by the design of the equipment, but the laser scanning direction can be optimized: by choosing the scanning direction so that the spatter trajectory follows the protective gas flow, effective spatter removal can be achieved (a toy sketch of this ordering follows this list). Anwar et al. [84] found that spatters redeposited near the outlet of the build chamber were greatly reduced when the laser scanned against the direction of the protective gas flow, but large spatter particles were still difficult to remove [85,120].
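As a toy illustration of the checkerboard ordering and scan/gas-flow alignment described above (the grid, island bookkeeping, and sign convention are our own simplifications; production slicers implement this very differently):

```python
import numpy as np

def checkerboard_island_order(nx: int, ny: int):
    """Return island indices scanning all 'white' islands before 'black' ones,
    so neighboring islands are never melted back-to-back."""
    white = [(i, j) for j in range(ny) for i in range(nx) if (i + j) % 2 == 0]
    black = [(i, j) for j in range(ny) for i in range(nx) if (i + j) % 2 == 1]
    return white + black

def scan_direction(gas_flow: np.ndarray) -> np.ndarray:
    """Most spatter travels opposite to the scan direction, so scanning
    against the gas flow sends spatter along the flow, aiding removal."""
    return -gas_flow / np.linalg.norm(gas_flow)

order = checkerboard_island_order(4, 3)
print(order[:6])                             # first six same-parity islands
print(scan_direction(np.array([1.0, 0.0])))  # gas flow +x -> scan toward -x
```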
Pre-sintering can form necks between powder particles and is often used in electron beam powder bed fusion (E-PBF) to prevent powder redistribution. Similarly, pre-sintering can be introduced into L-PBF to reduce the generation of spatters. Metal powder has a significantly higher thermal absorption rate than solid bulk metal, so the amount of spatter generated during L-PBF can be reduced by a scanning strategy that uses low-energy-density laser pre-sintering [103]. Khairallah et al. [69] demonstrated that combining high laser power with pre-sintering can significantly suppress spatter generation, particularly oversized (~200 μm) back-ejected spatter (spatter ejected in the backward direction) at the start of the scanning trajectory. Achee et al. [131] used pre-sintering to prevent spatter and denudation, and found that the control of spatter and denudation was most effective when the pre-sintering VED was 1-4 J/mm3. Moreover, Annovazzi et al. [132] indicated that pre-sintering the powder could help prevent spattering. Constantin et al. [133] demonstrated that adding a re-coating step can increase the part quality compared to the conventional L-PBF process.

Pressure of the Build Chamber

The environmental pressure within the build chamber affects spatter generation. As the environmental pressure increases, the total amount of spatter drops gradually, but the hot spatter generated by argon gas flow entrainment increases [36], and the smoothness and continuity of the built layers are degraded [35], as illustrated in Figure 24. Kaserer et al. [122] investigated the effect of pressure variation on L-PBF and discovered that the amount of spatter produced by the pure titanium and maraging steel 1.2709 used in their study did not change considerably when the process pressure was varied between 200 mbar and atmospheric pressure. Based on research of the laser-powder bed interaction at sub-atmospheric pressures, Bidare et al. [121] demonstrated that while gas entrainment rose as the ambient pressure decreased, the expanding laser plume prevented the powder particles from reaching the melt pool. Li et al. [123] investigated the gas flow, the gas-solid interaction, and the powder behavior in L-PBF at various ambient pressures. They noted that as the ambient pressure decreased, the powder spatter particle and divergence angles increased, consistent with the experimental results of Guo et al. [36], and that the number of spatters grew monotonically as the ambient pressure decreased; spatter movement was suppressed by increasing the ambient pressure during L-PBF. Annovazzi et al. [132] demonstrated that vacuum conditions and a high laser velocity are detrimental to the stability of the powder layer, inducing more spatter.

Protective Gas

The influence of the inert gas on spatter is due to two factors: the primary component of the gas (Ar, He, N2, or a 50% Ar-50% He mixture) and a secondary component, oxygen. The protective effect of the inert gas is due to its major component. Helium, which has a positive influence on spatter suppression, has a high thermal conductivity (ten times that of argon). As a result of this high thermal conductivity, the melt pool temperature is lower and the recoil is smaller, so less spatter is generated. However, helium's rarity makes it expensive, roughly 3 to 6 times the price of argon per cylinder, so argon is more commonly used in production. Oxygen, a minor component of the inert gas, can cause spatter to increase and oxidize; therefore, lowering the oxygen level in the inert gas helps suppress spatter generation.

• Primary components of the inert gas: Pauzon et al. [125] studied the effect of the protective gas on L-PBF of Ti-6Al-4V powder under three conditions: pure argon, pure helium, and a helium-argon mixture (with the oxygen content controlled at 100 ppm).
In comparison with the common use of argon, the study indicated that using pure helium or a helium-argon mixture can reduce hot spatter by at least 60% and 30%, respectively, as shown in Figure 25. No influence of the different protective gases on the number of cold spatters was detected. The study also found that adding helium to the gas helps cool spatter more quickly, which is important for limiting powder-bed degradation throughout L-PBF.

• Secondary component of the inert gas: According to Wu et al. [124], when the oxygen concentration in the protective environment increased considerably, spatter generation increased, as did the oxygen content of the spatter during flight. Spatter generation can therefore be reduced by decreasing the oxygen level of the build chamber. Reducing the oxygen content in the build chamber is an efficient approach to preventing spatter from forming: through multiple gas circulations, the equipment can decrease the oxygen level in the build chamber as much as feasible. Furthermore, keeping the build chamber slightly above atmospheric pressure prevents the entry of oxygen from outside the equipment while the flowing inert gas removes the generated spatter.

Gas Flow Strategies

Most modern L-PBF equipment uses a gas flow to remove process by-products from the process zone and enable an undisturbed process. Ladewig et al. [87] examined the influence of the protective gas flow uniformity and rate on single laser tracks and on the hatching process during the building of bulk material; the efficiency of spatter removal decreased as the velocity of the protective gas flow was reduced. Chien et al. [134] proposed optimizing and calibrating the inert purge gas flow in an L-PBF build chamber using simulation frameworks such as coupled computational fluid dynamics (CFD) and the discrete element method (DEM). Wang et al. [126] created a full-scale geometric model to explore the interaction between the protective gas flow and the laser-induced spatter particles. The flow field was found to be steady up to a height of 30 mm above the surface of the powder bed, and it was discovered that printing in this region could improve the final quality, because the consistent high-velocity flow of the protective gas in the center of the powder bed removed by-products such as spatter.
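All of the gas-flow studies above come down to whether the cross-flow can carry a particle out of the process zone before it settles back onto the bed. A deliberately crude ballistic estimate with Stokes drag illustrates the size dependence (all values below are assumptions; real spatter often lies outside the Stokes regime, which is why the cited works use full CFD-DEM/CFD-DPM models):

```python
import numpy as np

MU_AR = 2.2e-5   # argon dynamic viscosity, Pa*s (approximate, room temperature)
RHO_P = 7900.0   # steel-like particle density, kg/m^3 (assumed)
G = 9.81

def settle_drift(d_p: float, u_gas: float, h0: float) -> float:
    """Horizontal drift of a particle released at rest at height h0 into a
    uniform cross-flow u_gas, integrating gravity + Stokes drag explicitly."""
    m = RHO_P * np.pi * d_p**3 / 6.0
    k = 3.0 * np.pi * MU_AR * d_p           # Stokes drag coefficient
    v = np.zeros(2)                          # (vx, vz)
    x, z, dt = 0.0, h0, 1e-5
    while z > 0.0:
        drag = -k * (v - np.array([u_gas, 0.0]))
        v += dt * (drag / m + np.array([0.0, -G]))
        x += dt * v[0]
        z += dt * v[1]
    return x

# A 30 um particle in a 2 m/s cross-flow, ejected to 30 mm (the steady-flow
# height reported by Wang et al. above), drifts tens of centimetres before
# landing; a 200 um particle drifts only a few millimetres, consistent with
# reports that large spatter is hard to remove.
for d in (30e-6, 200e-6):
    print(f"d = {d*1e6:.0f} um -> drift {settle_drift(d, 2.0, 0.03):.3f} m")
```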
Equipment and Materials for L-PBF

In addition to regulating process parameters, research on L-PBF equipment and materials has become a major focus for mitigating the effect of spatter. These two research areas will also contribute to the future commercialization of L-PBF technology. A summary of the research on L-PBF equipment and materials is shown in Table 10.

Spatter generation can be reduced by optimizing the L-PBF equipment. Koike et al. [138,139] developed a high-gravity L-PBF system that generated a strong gravitational field by centrifugal acceleration. At a gravitational acceleration of more than 10 G, the spatters were greatly suppressed; as illustrated in Figure 26, the height of the spatter trajectory was inversely related to the increased gravitational acceleration. They noted that when a suitably strong gravitational acceleration was applied, spatter generation was dramatically suppressed. Philo et al. [135] used numerical simulations to investigate the interaction between the gas flow and spatter. They discovered that the parameters of the protective gas inlet and outlet in the build chamber (e.g., the radius of the inlet nozzles, the heights of the inlet and outlet) significantly affect the flow velocity, uniformity, and spatter concentration. Xiao et al. [136] simulated the flow field in an L-PBF build chamber to optimize the flow-field structure, evaluating the flow-field state with the particle tracer method. It was shown that structural optimization made the flow-field distribution more uniform, which can improve the ability of the gas flow to entrain spatter. To increase the capability for spatter removal, Zhang et al. [137] proposed a novel design for the gas flow system in the build chamber, as illustrated in Figure 27. The effect of the gas flow on the solid particles was obtained using a fully coupled CFD-DPM fluid-particle interaction model. The new design increased the spatter removal rate by reducing the Coanda effect, which substantially affected the spatter removal process; in addition, another row of nozzles was added directly under the primary inlet nozzles. Current novel L-PBF machines generally use multiple laser beams printing simultaneously to increase efficiency, which generates more spatter. Optimizing the equipment, especially the build chamber, to remove spatter has therefore become a major concern for many L-PBF machine manufacturers.
SLM Solutions GmbH (Lubeck, Germany) has introduced operating the build chamber at high pressure in order to minimize spatter activity, which has lowered spatter generation [144]. Through a streamlined, specially shaped flow channel design, Bright Co. Ltd. (Xi'an, China) [145] reduced the vortex current at the outlet of the protective gas, ensuring that the vapor plume and spatters are blown away and do not redeposit on the forming surface during the build, which solves the surface quality problem during printing. General Electric Co. invented a gas flow system for an additive manufacturing machine that uses a gas flow parallel to the powder bed to remove by-products (including spatter) from the L-PBF manufacturing process [146]. The MYSINT 100 3D printer from SISMA (Italy) [147] has a stable and uniform flow field to ensure spatter removal efficiency.

Research on Powder Material

The physical properties and oxygen content of the powder can also contribute to differences in spatter behavior; spatter can be reduced by a high viscosity, high thermal conductivity, and high density. Powders with a low oxygen content cause significantly less spatter in L-PBF.

• Physical properties: High thermal conductivity and densification have a positive effect on spatter suppression. Because aluminum has a higher thermal conductivity in the liquid state than 316L, the laser energy can be rapidly dissipated into the substrate, limiting the vaporization of the aluminum alloy and the resulting spattering [78].
Gunenthiram et al. [78] pointed out that, due to the densification effect, the melt pool sits below the surface of the powder bed, which inhibits the generation of spatter. The melt pools formed by laser irradiation of different powder particles have varying viscosities, which influence the generation of spatter. Leung et al. [140] investigated the laser-material interaction of 316L stainless steel powder and 13-93 bioactive glass powder during L-PBF at short time scales. The results indicate that droplet spatters are easily generated in a low-viscosity melt (e.g., 316L) because of the strong Marangoni-driven flow; by contrast, a high-viscosity melt (e.g., 13-93 bioactive glass) reduces spatter generation by damping the Marangoni-driven flow.

• Oxygen content: For the raw powder used in L-PBF, the higher the oxygen content, the greater the melt pool instability and the greater the probability of spattering. Fedina et al. [143] found that as the oxygen content of the powder rose, the number of spatters increased, whereas the other chemical elements remained relatively constant; they suggested that the increase in oxygen might have affected the powder spattering. Additionally, an increase in the powder oxygen content leads to an increase in the oxygen content of the melt pool, which in turn affects the flow behavior of the fluid in the melt pool, leading to spattering as the melt pool breaks into molten droplets [148]. Fedina et al. [142] investigated L-PBF dynamics and powder behavior by comparing water-atomized and gas-atomized powders. They discovered that the water-atomized powder showed more frequent spatter ejection and speculated that the higher oxygen level in the powder caused the melt pool to become unstable, resulting in an excessive number of spatters.

Manufacturers are also concentrating their efforts on developing powder materials suitable for L-PBF, offering a wide variety of powders, such as titanium alloys, nickel alloys, aluminum alloys, and cobalt-chromium alloys, for the aerospace, automotive, and biomedical fields.

Conclusions

This paper reviews the literature on the in situ detection, generation, effects, and countermeasures against spatter in L-PBF. The main points of this review are summarized as follows:

(1) In situ detection system for spatter during L-PBF: The detection methods are based on the physical properties (trajectory and brightness) of the spatter and melt pool. The variances in trajectory and brightness lead to differences in the sensors and light sources of the detection system.

• Sensor: Because the trajectories of spatters in 3D space are complex and unpredictable compared to the melt pool, detection requires multiple sensors and sophisticated algorithms. A 3D detection solution with a quadruple-eye sensor combined with algorithms has been applied in a visible-light detection system. The emergence of 3D detection solutions provides more information in three dimensions, which improves the accuracy of spatter detection.

• Light source: In contrast to the bright, high-temperature melt pool, the spatters consist of both bright hot droplet spatters and dark cold powder spatters. The motion of dark cold powder spatter can hardly be captured without an external light source; therefore, a visible light source must be applied to enable the detection of both types of spatters.
(2) Mechanism of spatter generation in L-PBF: Spatter can be divided into droplet spatter from the "liquid base" of the melt pool and powder spatter from the "solid base" of the substrate.

• Droplet spatter from the "liquid base" of the melt pool: The droplet spatter originates from the instability of the melt pool. The Marangoni effect and the metal vapor recoil pressure generated on the surface of the melt pool lead to spatter ejection from the "liquid base" of the melt pool.

• Powder spatter from the "solid base" of the substrate: Powder spatter is induced by the entrainment effect of the ambient gas flow driven by the metal vapor. A low-pressure area is generated near the high-speed moving metal vapor, and the surrounding inert protective gas is "entrained" to the vicinity of the melt pool, driving the powder spatter to be ejected from the "solid base" of the substrate.

(3) Spatter effects during L-PBF: Spatter has negative effects not only on the equipment and the quality of parts, but also on the whole life cycle of the powder. Therefore, spatter significantly affects both the current L-PBF build and subsequent L-PBF builds.

• Equipment: The laser light path is obstructed by the ejected spatter, and the scraper is damaged by the redeposited spatter.

• Current L-PBF manufacturing: Redeposited spatter can degrade the part structure and mechanical properties.

• Subsequent L-PBF manufacturing: The spatters redeposit into the powder bed and become inclusions, reducing the quality of the recycled powder and affecting subsequent L-PBF manufacturing.

(4) Countermeasures for spatter in L-PBF: Over the full cycle of spatter (generation-ejection-redeposition), the countermeasures are divided into spatter generation suppression and spatter removal.

• Spatter generation suppression: The generation of spatter can be suppressed by optimizing the laser volumetric energy density (e.g., raising the scanning velocity, lowering the laser power, decreasing the layer thickness, and increasing the laser spot size), the laser beam mode (Bessel beams), and the pressure of the build chamber.

• Spatter removal efficiency: The gas flow removes process by-products from the process zone to enable an undisturbed process. Simulation frameworks (CFD and DEM) and a full-scale geometric model are employed to optimize the flow-field structure. A high-velocity gas flow below a certain value (to counter the Coanda effect), applied in the center of the powder bed, greatly improves the efficiency of spatter removal.

Future Research Directions

As the main technology in metal AM, L-PBF is evolving toward greater efficiency, precision, and speed and toward the fabrication of large-sized parts. However, spattering has a negative influence on product quality during L-PBF. The following trends characterize the directions of research on L-PBF spatter behavior:

(1) Study of spatter behavior under multiple lasers: Multi-laser synergy has been the main route to more efficient fabrication of large-sized parts. However, the mechanism of spatter becomes more complicated due to the enhancement of metal vapor, the Marangoni effect, and entrainment under multi-laser interaction. Additionally, each laser induces both "liquid-based" and "solid-based" ejected spatters, and the amount of spatter increases dramatically when multiple lasers are used. The spatter is also more difficult to remove by gas flow in a large-scale build chamber.
Therefore, research on spatter in multi-beam manufacturing has become more urgent.

(2) Improving the quality of in situ spatter detection: The combination of a visible-light high-speed camera and X-ray imaging technology in spatter detection follows the development trend of spatter detection [149]. Combining the two methods enables the study of spatter behaviors from the inside (melt pool) to the outside (powder bed) and yields more information on spatter behavior. The multi-sensor system is indispensable in spatter research, and the number of sensors can be expanded beyond the existing quadruple-eye sensor.

(3) Information processing using artificial intelligence: The data volume of the multi-sensor system could increase exponentially with the addition of data sources such as temperature, radiant intensity, light intensity, acoustic signals, and images of melt pools and spatters. Therefore, machine learning (supervised, semi-supervised, or unsupervised) is necessary for the efficient processing of such multi-source, heterogeneous data.

(4) Countermeasures for spatter: At present, simulations are commonly used to study spatter countermeasures, and the raw data used in the simulations come from spatter detection. Improving the comprehensiveness and accuracy of the detection information is conducive to the practical application of simulated spatter countermeasures.

(5) Commercial L-PBF equipment: Several companies (e.g., Concept Laser, EOS, SLM Solutions) have developed systems for detecting melt pools during L-PBF manufacturing, but the equipment still lacks spatter detection. Given the complex spatter behaviors and their serious negative impact in L-PBF, it is necessary to remove as much of the spatter as possible by dynamic control of the protective gas flow field. The addition of an in situ spatter detection system enables dynamic feedback control of the gas flow field.
Measurement of carrier lifetime in micron-scaled materials using resonant microwave circuits

The measurement of minority carrier lifetimes is vital to determining the material quality and operational bandwidth of a broad range of optoelectronic devices. Typically, these measurements are made by recording the temporal decay of a carrier-concentration-dependent material property following pulsed optical excitation. Such approaches require some combination of efficient emission from the material under test, specialized collection optics, large sample areas, spatially uniform excitation, and/or the fabrication of ohmic contacts, depending on the technique used. In contrast, here we introduce a technique that provides electrical readout of minority carrier lifetimes using a passive microwave resonator circuit. We demonstrate >10^5 improvement in sensitivity, compared with traditional photoemission decay experiments, and the ability to measure carrier dynamics in micron-scale volumes, much smaller than is possible with other techniques. The approach presented is applicable to a wide range of 2D, micro-, or nano-scaled materials, as well as weak emitters or non-radiative materials.

Traditionally, direct electrical measurements of minority carrier lifetimes are made using direct current (DC) photoconductive decay (PCD) measurements, whereas noninvasive, contact-free lifetime measurements are made using time-resolved microwave reflectance (TMR) or time-resolved photoluminescence (TRPL). In TRPL, a short laser pulse optically excites a light-emitting material and the resulting photoluminescence (PL) is then collected as a function of time; carrier lifetimes are then extracted from the temporal decay of the emitted PL 1. However, in addition to the pulsed laser, TRPL also requires collection optics and a detector that have both been tailored to the wavelength of light emitted by the sample. Hence, oftentimes entirely different optics and photodetectors are required for samples emitting in different regions of the electromagnetic spectrum. In addition, TRPL often requires tradeoffs in the choice of detectors, balancing detector sensitivity (allowing for measurement of weakly emitting or small samples) and detector response times (allowing for measurement of shorter carrier lifetimes), with more sensitive detection almost always accompanied by poorer time resolution. Alternatively, small samples or weak emitters can be measured using the PCD 2,3 technique, which records the temporal dependence of a sample's DC conductivity following pulsed excitation. However, such an approach requires fabrication of ohmic contacts to the material under test and thus increased time and expense associated with the contact patterning and metal deposition. The processes for contact formation can vary depending on the material of interest and may not be feasible for materials or structures such as polymers, organic dyes, nanowires, or micron- or nano-scale two-dimensional (2D) materials. The non-invasive, contact-free analog to PCD is TMR, which records the time evolution of the microwave reflection from a sample following optical excitation, essentially probing a change in the sample's radio frequency (RF) conductivity 4,5. Compared with TRPL, TMR has the advantage of improved sensitivity; furthermore, as it is the photoconductivity that is probed, the sample does not need to emit and there is no need for wavelength-tailored collection optics or a high-speed optical detector.
The free-space nature of conventional TMR results in sensitivity to carrier concentrations over large areas (up to ~1 cm²), which can complicate analysis or require large-diameter optical pump beams when uniform carrier density profiles are desired, as is the case when extracting Auger or radiative coefficients of a material 6. For micron-scaled samples, the mismatch between the large RF probe and sample area will significantly decrease the TMR signal-to-noise ratio (SNR), as only a small fraction of the reflected RF signal is modulated by the change in the micro-scale sample's carrier concentration. Similar difficulties would be encountered for TMR measurements of photoexcited carriers in 2D materials, as exfoliation processes typically result in approximately micron-scale flakes of 2D materials 7. The interaction of RF signals with optoelectronic materials can be scaled and concentrated to smaller volumes by using a microstrip split-ring resonator (SRR), which, on resonance, provides strong spatial localization of the RF field 8,9. Comparable structures have been employed in previous RF SRR-sensing demonstrations, which measure changes in the resonator's RF response, although such an approach was used to investigate large-area patterns and on relatively slow (~s) timescales in the ultraviolet 10,11. Here we present a technique for measuring nanosecond-timescale carrier dynamics in small volumes of optoelectronic materials: micro-scale time-resolved microwave resonator response (µ-TRMRR), shown schematically in Fig. 1. Micron-scale infrared (IR) pixel elements of a semiconductor material under test are placed in the split gap of an SRR coupled to a microstrip busline, and when photoexcited, alter the RF circuit's S21 parameter (the ratio of the forward traveling voltage waves at port 1 and port 2). Measuring the time evolution of the resonator's S21 parameter, at resonance, allows for electrical readout of carrier dynamics in the sample. By using small Ku band (12-18 GHz) resonators and driving the circuit at a single resonant frequency, we are able to effectively characterize high-speed (~ns) carrier dynamics in micron-scale mid-IR materials, something not practically achievable using traditional contact-free lifetime measurement techniques or investigated in previous RF resonator measurements. Specifically, we measure the photoexcited carrier lifetimes in 24 µm × 24 µm pixels of the narrow-bandgap III-V material InAsSb, which is of practical interest for mid-wave IR (MWIR) detector applications [12][13][14][15]. The same material is measured using TRPL as well as large-area TMR measurements, and we compare the extracted lifetimes from the µ-TRMRR, TMR, and TRPL approaches. We demonstrate a five-order-of-magnitude improvement in the measurement sensitivity compared with TRPL and the ability to extract carrier lifetimes with photoexcitation as weak as 35 fJ incident excitation pulse energy upon the pixel at room temperature. The presented technique offers an extremely sensitive and high-speed (~ns) approach for measuring carrier lifetimes in small volumes of any number of optoelectronic materials.

[Fig. 1 caption] Overview of the proposed technique. A radio frequency (RF) source outputs a continuous wave (CW) microwave signal at the resonant frequency of the split-ring resonator (SRR) through port 1. A pulsed laser excites electron-hole pairs (EHPs) in the material under study (an indium arsenide antimonide pixel in this case) loaded within the split gap of the resonator. The EHPs modulate the CW signal on the microstrip busline, whose envelope function is detected by a Schottky diode RF detector. The modulated signal is then sent to a high-speed oscilloscope synchronized to the laser repetition rate. A micrograph shows both the pixel and a thin layer of insulating hexagonal boron nitride (hBN) loaded into the SRR; the scale bar in the image is 20 μm.

Results

Optical characterization. Figure 2 shows the temperature-dependent PL and TRPL from bulk, as-grown InAsSb and an array of InAsSb pixels transferred to thermal release tape.
The TRPL is measured using a fast HgCdTe (mercury cadmium telluride, MCT) detector, as described in the Methods section. Strong MWIR PL is observed from both the as-grown material and the pixel array of the narrow-bandgap InAsSb, particularly at low temperature; furthermore, as the temperature increases, the PL intensity decreases (from increased non-radiative effects) and red shifts (from a decrease in the InAsSb bandgap). TRPL measurements on the bulk material (Fig. 2c) give low-injection minority carrier lifetimes varying from 360 to 194 ns as the temperature increases from 77 to 300 K. Figure 2d shows the TRPL from the large-area pixel array on thermal release tape. The pixel TRPL lifetime measurement shows a similar temperature dependence to the bulk InAsSb sample, with lifetimes ranging from 381 to 112 ns with increasing temperature. The pixel TRPL signal (Fig. 2d) can only be observed from a large-area array of pixels, as the emission from a single pixel is too weak to be time resolved. The benefits of the MCT detector used for these TRPL measurements include response out to wavelengths of ~12 μm and a time constant of ~4 ns. However, the faster response time of the MCT comes with a significant sensitivity penalty, making detection of weak emission challenging. As demonstrated in the 77 K data of Fig. 2c, the choice of fitting interval strongly determines the lifetime extracted from TRPL. The low-injection condition (δn = δp < n0), where the photoexcited excess carrier concentrations (δn = δp) are less than the background doping (n0), is the regime of greatest interest for detector material characterization, as it defines a region of linear detector operation corresponding to typical photon fluxes for IR detection. In low injection, a single-exponential fit can be used to extract a minority carrier lifetime. However, weak emitters and/or inefficient detection can result in the TRPL signal falling beneath the system noise floor before the low-injection condition is satisfied. In such a situation, fitting to different portions of the decay signal will give very different lifetimes, as shown in Fig. 2c, making the extraction of an accurate minority carrier lifetime challenging in the case of large-area samples and effectively impossible for a single pixel. This limitation becomes particularly problematic for long-wavelength materials, where high-speed and high-sensitivity detectors are harder to come by and where non-radiative recombination can dominate carrier dynamics and make for inefficient emission, especially at non-cryogenic temperatures 16,17.
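The single-exponential tail fit described above is straightforward to reproduce. A minimal sketch on synthetic data (the decay constant echoes the 194 ns bulk 300 K value quoted above; the noise level, fit window, and initial guesses are invented for illustration):

```python
import numpy as np
from scipy.optimize import curve_fit

def single_exp(t, a, tau, c):
    """I(t) = a * exp(-t / tau) + c, fit to the low-injection tail."""
    return a * np.exp(-t / tau) + c

rng = np.random.default_rng(0)
t = np.linspace(0, 2000, 2001)                 # ns
signal = 1.0 * np.exp(-t / 194.0) + 0.002      # tau = 194 ns (cf. bulk 300 K)
signal += rng.normal(0, 5e-3, t.size)          # assumed detector noise floor

# Fit only the tail, where delta_n < n0 and the decay is single-exponential;
# fitting the earlier, high-injection portion would bias tau, as noted above.
tail = t > 400
popt, _ = curve_fit(single_exp, t[tail], signal[tail], p0=(0.5, 150.0, 0.0))
print(f"fitted tau = {popt[1]:.0f} ns")
```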
Single-pixel time-resolved microwave resonator response. Although TMR alleviates many of the challenges associated with TRPL, the large free-space microwave probe requires a correspondingly large and uniform optical pump signal. In contrast, the μ-TRMRR approach uses a resonant microwave circuit, which can strongly localize and enhance the RF field, to provide strong overlap with micro-scale volumes of the material under test. In addition, the use of on-chip microwave probes allows for direct electrical readout of carrier dynamics in our materials. Figure 3a shows the experimental (solid) and modeled (dashed) S21 parameters for the resonant microwave circuit used in our µ-TRMRR set-up, which consists of a microstrip SRR coupled to a microstrip busline. As the bare pixel is unintentionally doped (UID; see Methods), its dark conductivity can inadvertently short the resonator; furthermore, filling the air gap with a larger-index material capacitively loads the SRR, which can also weaken the resonance. To minimize these effects, a thin layer of insulating hexagonal boron nitride (hBN) is placed before the placement of the pixel, which reduces both shorting and capacitive loading. A clear bandstop feature is observed at ~16.5 GHz for the bare SRR, which shifts to lower frequency (and weakens in magnitude) when the SRR is loaded with the InAsSb pixel and hBN spacer. The transmission line model (described in Methods) for the bare and loaded circuit is shown in Fig. 3b, where the hBN and pixel are modeled as a shunt across the SRR consisting of a series combination of capacitance (hBN) and conductance (pixel). Optical excitation of the pixel generates electron-hole pairs (EHPs), increasing the pixel conductivity and thus modulating the circuit's S21 parameter. This modulation of the S21 parameter can clearly be observed in the modeled and experimental RF spectra of our device under illumination in Fig. 3a. Understanding the behavior of the circuit response with respect to carrier concentration is key to extracting accurate, low-injection lifetimes. From the S21 plots in Fig. 3a, we observe that photoexcitation of carriers results in a negative ΔS21 (when measured on resonance, at the dip in the dark S21 spectrum), which must limit the dynamic range of the circuit response to the dark S21 value. Under continuous wave (CW) excitation (Fig. 4a), we see a clear saturation in the ΔS21 with increasing laser intensity, which could result from saturation of the circuit response or, alternatively, from excitation-power-dependent mobility or lifetimes in the InAsSb, the latter arising from the increased contribution of fast non-radiative recombination mechanisms (Auger recombination in particular) to the total recombination rate 6,15,[18][19][20]. However, our modeled ΔS21 (Fig. 4b), as a function of increasing shunt conductance in the circuit, matches the experimental ΔS21 data under CW illumination, suggesting the observed saturation (at these excitation powers) is mostly intrinsic to the SRR circuit and unrelated to carrier lifetime. Figure 4c shows the excitation pulse energy dependence of the µ-TRMRR signal. Measuring the initial response amplitude |µ-TRMRR| vs. pulse energy, we can obtain an experimental picture of the circuit response (independent of carrier lifetime), which also shows a clear saturation with increasing excitation energy. In Fig. 4d, we plot both the |µ-TRMRR| signal and the circuit response to CW excitation, assuming a constant minority carrier lifetime (red axes), as a function of excess carrier concentration (see Methods for details), as well as the modeled circuit response as a function of photo-induced conductance (black axes).
Both the |µ-TRMRR| data and the CW response show a clear saturation at carrier concentrations greater than ~10^17 cm^-3, near or slightly larger than the measured background carrier concentration in the InAsSb. For carrier concentrations less than ~10^16 cm^-3, a region corresponding to the low-injection regime, our circuit response remains linear, in agreement with our modeled response for small photo-induced conductances. The results summarized in Fig. 4d suggest that although the circuit response saturates at large excess carrier concentrations, the strong sensitivity to carrier concentration and the linear response of our system across several orders of magnitude of carrier concentration offer a range of operation well suited for exploring Auger, radiative, and Shockley-Read-Hall lifetimes in IR detector materials. This result points to the primary advantage of our µ-TRMRR technique when compared with TRPL, which requires significant trade-offs between detector sensitivity and speed, and which, for fast detector response times and weakly emitting samples, cannot accurately measure carrier dynamics at low carrier concentrations.

Comparison of lifetime techniques. Figure 5a shows the temperature-dependent µ-TRMRR data from the single InAsSb pixel, at the same pump energy used for the TRPL data in Fig. 2 (but with only 2000 averages, compared with the 50,000 averages required for the TRPL). A clear improvement is observed in the µ-TRMRR SNR, with fitting possible to much longer times. Figure 5 also shows a direct comparison of the TRPL data from the array of InAsSb pixels (Fig. 5b) and our µ-TRMRR results from a single InAsSb pixel (Fig. 5c) at 300 K (both with 2000 averages, for an accurate comparison). To achieve an SNR adequate to extract a lifetime, the pixel array requires pumping with a pulse energy of 68 nJ, whereas a comparable signal is observed from the µ-TRMRR with only 68 pJ of pulse energy (equivalent to 35.5 fJ incident on the pixel). Even with the three-orders-of-magnitude attenuation in pump energy, the µ-TRMRR allows for fitting to the tail of the signal, where the tail of the TRPL emission is well below the noise floor. Thus, the µ-TRMRR approach has significant benefits over TRPL, where weak signals from poor emitters or long carrier lifetimes (effectively stretching photon emission over longer time intervals) can lead to low SNR and thus inaccurate measurement of lifetimes. Moreover, the significant improvement in our time-response data using the µ-TRMRR approach comes from the response of a single pixel, whereas the TRPL data come from the collective response of more than 1000 pixels. As TRPL requires three orders of magnitude more power with >1000 pixels, our µ-TRMRR approach offers at least a ~10^5 improvement in sensitivity when compared with TRPL performed with a conventional, high-speed MCT. Alternatively, by calculating the absorbed optical energy (see Methods) required to achieve comparable responses from the pixel array (16.6 nJ) and the single pixel in our μ-TRMRR system (35.5 fJ), we similarly obtain a >10^5 improvement in sensitivity. The lifetime data from TMR, our µ-TRMRR technique, and TRPL (with MCT and InSb detectors) are compared in Fig. 6. We observe that the TMR, µ-TRMRR, and TRPL (with an InSb detector) all give similar lifetimes and temperature dependence, indicating that the increased sensitivity of the µ-TRMRR approach does not come with an associated penalty in the accuracy of lifetime extraction.
However, the MCT detector TRPL data show a clear discrepancy in extracted lifetimes, particularly as the temperature increases and emission from the material becomes less efficient. This effect again points to the primary deficiency of the TRPL method described earlier in the discussion of Fig. 2c, where weak emission and/or low detector sensitivity prevents the accurate measurement of low-injection carrier lifetimes in the tail of the decay from poorly emitting samples.

[Fig. 3 caption] Modeling the coupled pixel-resonator circuit. a Experimental (solid) and modeled (dashed) S21 spectra for the microstrip resonator without pixel (black), after loading with hexagonal boron nitride (hBN) and the InAsSb pixel (red), and under illumination of the InAsSb pixel post-loading (blue). b Equivalent circuit model of a split-ring resonator (SRR) capacitively coupled to a microstrip transmission line terminated by a load. The hBN/pixel structure is depicted as a shunt with parallel capacitance (hBN) and variable conductance (InAsSb pixel).

The use of an InSb detector yields bulk InAsSb lifetimes that closely match our single-pixel µ-TRMRR results, as the InSb detector is far more sensitive than the MCT detector, allowing for a more accurate fit further into the tail of the TRPL signal (low-injection carrier concentrations are now above the noise floor). However, the sensitivity comes with a significant speed and spectral range penalty (see Methods); furthermore, the sensitivity would still not be sufficient to time-resolve a single, micro-scale pixel. In summary, we have presented the μ-TRMRR technique, capable of measuring minority carrier lifetimes in micro-scale volumes of optoelectronic materials without the need for contacts or current collection. When compared with the results from TRPL measurements on the same material using an MCT detector, we show that the μ-TRMRR technique achieves a comparable signal using a factor of >10^5 less absorbed energy, which allows us to characterize micron-scale volumes of weakly emitting IR materials. Our µ-TRMRR results match well with lifetimes extracted from both conventional TMR and TRPL measurements, pointing to the accuracy of our technique for measuring carrier lifetimes in weakly emitting or long-lifetime materials. As our technique measures the material response at microwave frequencies, light emission from our samples is not required, and thus there are no collection optics or optical detectors that have to be tailored to optoelectronic materials operating at different wavelengths. These advantages are especially important for the characterization of micron-scale volumes, such as the InAsSb pixels studied here, or for potential future studies of nano-scale volumes of 2D materials.

Methods

Fabrication. The material investigated in this work is grown by molecular beam epitaxy and consists of a 1 μm-thick UID InAsSb layer grown above a 250 nm-thick AlAsSb sacrificial layer, with both layers lattice-matched to the GaSb substrate. The background carrier concentration of the n-InAsSb was measured to be 2.11 × 10^16 cm^-3 and 2.4 × 10^17 cm^-3 at 77 K and 300 K, respectively, by Hall measurements, although this approach may overestimate the true carrier concentration 21. Fabrication of the 24 μm × 24 μm InAsSb pixels and their transfer to carrier substrates are described in detail in ref. 22. The pixel period pre-transfer was 30 μm, and following transfer to the thermal release tape the pixel array consisted of a >4 mm × 4 mm pattern with a fill factor of ~0.5.
The SRRs are fabricated on a 500 µm-thick 99.6% aluminum oxide substrate via standard photolithography and metallization with Ti/Au layers of 10/500 nm on both the top (patterned) and bottom (continuous groundplane) of the substrate. The microstrip and SRR widths are both 50 μm. The SRR side length is 1 mm, with a 30 μm coupling gap between the SRR and microstrip busline. Individual pixels and hBN spacers are transferred to the 10 μm gap of the SRR using the pick-and-place process detailed in ref. 23 . Optical characterization. The as-grown InAsSb and the transferred pixel array are both characterized by PL spectroscopy and TRPL. For the former, the samples are placed in an optical cryostat and optically pumped by a modulated 980 nm laser diode. Emission spectra are collected using amplitude modulation step-scan with a Bruker v80V Fourier transform IR spectrometer. In the TRPL experiment, samples in an optical microscope cryostat are excited by a~1 ns pulsed 1064 nm laser emitting 6.8 µJ pulses at a 10 kHz repetition rate. Emitted light is collected by an off-axis parabolic mirror and focused onto a Kolmar LN 2 -cooled high-speed (~4 ns time constant) MCT detector using a germanium lens. The resulting MCT signal is recorded with a high-speed oscilloscope (with 50,000 averages, to increase signal to noise and allow for measurement of the tail of the response). In addition, we performed a second set of TRPL measurements on the as-grown InAsSb using a lower speed (~23 ns), higher responsivity InSb detector, and a 1535 nm laser (with~5 ns pulses). Pulse energy in the TRPL experiments is controlled by the use of neutral density filters. The carrier lifetimes are extracted by a single exponential fit to the tail of the TRPL response, ideally corresponding to the low-injection regime of significant import for detector materials. Both PL and TRPL data are collected for a range of sample temperatures for both the bulk as-grown material and a largearea pixel array. Microwave characterization. For the TMR measurements, a tunable optical parametric oscillator with 10 ns pulse width is used to excite the sample above the InAsSb band edge and the modulated reflection from a 94 GHz microwave Gunn diode is collected by a microwave detector. The transient TMR signal is then amplified using a wide-bandwidth preamplifier and averaged on a digital oscilloscope. Details of the TMR measurement system can be found in ref. 6 . For μ-TRMRR, the SRR circuits are measured in an Advanced Research Systems RF cryoprobe stage with a ZnSe window for optical access. RF S 21 spectra of the SRR circuits are collected using an Anritsu Shockline MS46122A Vector Network Analyzer with short-open-load-thru calibrations performed to move the reference plane to the probe tips. For the µ-TRMRR experiments, the pixels are optically excited using the above 1064 nm pulsed laser. Excitation pulse energy is controlled via neutral density filters up to ND5. Excitation pulse intensity is calculated by measuring the 2D beam profile of the laser at the position of the pixel. Time response of the pixel is measured by driving the SRR circuit on resonance using an RF source generator at fixed frequency. The transmitted RF signal is collected by a Pasternack 8013 zero-bias Schottky diode (with a capacitance of 470 pF) and fed into a digital oscilloscope, and the resulting time response is collected and averaged. The response of the circuit to CW excitation is obtained by modulating a 785 nm laser, focused onto the pixel, at~48 Hz. 
The circuit is driven on resonance, and the circuit response is collected by the zero-bias Schottky diode and fed into a lock-in amplifier. Modeling and calculations. We analytically model our circuit in Mathematica using the lumped element model shown in Fig. 3b. Transmission line impedances and permittivities are calculated using standard microstrip expressions 24. Reflection coefficients and input impedances are determined at each impedance mismatch and are calculated as a function of frequency along with the lumped element impedance of the SRR 25. Using commercial finite element method (FEM) simulation software (www.ansys.com/products/electronics/ansys-hfss), the surface currents and electric fields of our circuit were plotted. On resonance, we observe a divergence in the current on the busline, which is out of phase with the electric field across the coupling region, suggesting that our SRR is capacitively coupled to the microstrip busline. Thus, we model the resonator as a capacitively coupled shunt resistor-inductor-capacitor (RLC) circuit 26,27. To calculate the excess carrier concentration, Δn, for pulsed excitation, the energy incident upon the pixel is determined by measuring the power and 2D spatial profile of the laser beam at the position of the sample. The pulse energy is simply the measured power divided by the repetition rate. As the beam width is larger than the pixel, the total energy incident upon the pixel is simply the product of the amplitude of the Gaussian fit to the beam profile, the pixel area, the cryostat window transmission, and the transmission at the InAsSb/air interface, resulting in the 35.5 fJ incident upon the single pixel in Fig. 5c; for Fig. 5b, as the pixel cluster is larger than the spot size, the total beam energy is multiplied by an approximate pixel-array fill factor, as well as the window and interface transmissions, which results in ~16.6 nJ. For the CW excitation, we measure the beam power and 2D profile in a similar manner and then calculate an EHP generation rate (G) assuming all photons entering the pixel are absorbed. The carrier concentration will then be Δn, Δp ∝ Gτ(Δn, Δp), assuming a carrier lifetime τ(Δn, Δp). We obtain an approximate Δn by taking the product of the calculated generation rate and a constant carrier lifetime (τ ≈ 250 ns).
Fig. 6 Comparison of lifetime measurement techniques. Extracted carrier lifetimes as a function of temperature for time-resolved microwave reflectance (TMR) on as-grown InAsSb (orange), time-resolved photoluminescence (TRPL) on as-grown InAsSb using InSb (cyan) and mercury cadmium telluride (MCT) (light blue) detectors, TRPL from the pixel array using the MCT detector (blue), and single-pixel µ-TRMRR (red) techniques. TRPL (InSb), TMR, and μ-TRMRR agree well, but as shown, when using a less sensitive detector (such as MCT), the TRPL signal falls beneath the noise floor before the low-injection condition is satisfied, preventing an accurate lifetime extraction. We see that a comparable response is obtained from the μ-TRMRR with >10^5 less energy than is needed for TRPL.
Data availability Data that support these findings are available from the corresponding author upon request.
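The two calculations described in the Methods above can be illustrated with short sketches. The first treats the capacitively coupled SRR of Fig. 3b as a series-RLC branch shunting a matched line and folds the photo-induced pixel conductance into the branch loss; the component values are illustrative assumptions, not the fitted parameters of the actual Mathematica model.

```python
import numpy as np

# Simplified |S21| of a series-RLC branch shunting a matched 50-ohm line,
# standing in for the capacitively coupled SRR of Fig. 3b.  All component
# values are illustrative assumptions; the photo-induced pixel conductance is
# represented crudely as extra series loss in the branch.

Z0 = 50.0            # ohm, line impedance
L = 2.0e-9           # H, effective resonator inductance (assumed)
C = 0.6e-12          # F, effective resonator capacitance (assumed)

f = np.linspace(2e9, 8e9, 4001)   # Hz
w = 2.0 * np.pi * f

def s21_mag(R):
    """|S21| of a shunt impedance Z = R + jwL + 1/(jwC) on a matched line."""
    Z = R + 1j * w * L + 1.0 / (1j * w * C)
    return np.abs(2.0 * Z / (2.0 * Z + Z0))

for label, R in (("dark", 2.0), ("illuminated", 8.0)):
    s = s21_mag(R)
    print(f"{label:11s}: f0 = {f[np.argmin(s)] / 1e9:.2f} GHz, min |S21| = {s.min():.3f}")
```

The second reproduces the excess-carrier bookkeeping: photons absorbed per pulse divided by the pixel volume for pulsed excitation, and Δn ≈ Gτ for CW excitation. The absorbed CW power is an assumed placeholder; only the 35.5 fJ pulse energy, the pixel dimensions, and the 250 ns lifetime are taken from the text.

```python
h, c = 6.626e-34, 2.998e8          # Planck constant (J s), speed of light (m/s)

pixel_area = (24e-6) ** 2          # m^2, 24 um x 24 um pixel
thickness = 1.0e-6                 # m, 1 um InAsSb layer
volume = pixel_area * thickness    # m^3

# Pulsed case: 35.5 fJ incident on the pixel at 1064 nm, assuming every photon
# entering the pixel is absorbed.
e_pulse = 35.5e-15                 # J
photon_energy = h * c / 1064e-9    # J
dn_pulsed = (e_pulse / photon_energy) / volume
print(f"pulsed: dn ~ {dn_pulsed * 1e-6:.1e} cm^-3")

# CW case: dn ~ G * tau with an assumed absorbed power and a constant lifetime.
p_cw = 1.0e-6                      # W absorbed in the pixel (illustrative)
tau = 250e-9                       # s, constant lifetime used in the text
G = (p_cw / (h * c / 785e-9)) / volume     # EHP generation rate, m^-3 s^-1
print(f"CW:     dn ~ {G * tau * 1e-6:.1e} cm^-3")
```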
5,965.4
2019-04-09T00:00:00.000
[ "Physics", "Engineering", "Materials Science" ]
A Low-Spin Co II /Nitroxide Complex for Distance Measurements at Q-Band Frequencies: Pulse dipolar electron paramagnetic resonance spectroscopy (PDS) is continuously furthering the understanding of chemical and biological assemblies through distance measurements in the nanometer range. New paramagnets and pulse sequences can provide structural insights not accessible through other techniques. In the pursuit of alternative spin centers for PDS, we synthesized a low-spin Co II complex bearing a nitroxide (NO) moiety, where both the Co II and NO have an electron spin S of 1/2. We measured Co II -NO distances with the well-established double electron–electron resonance (DEER, aka PELDOR) experiment, as well as with the five- and six-pulse relaxation-induced dipolar modulation enhancement (RIDME) spectroscopies at Q-band frequencies (34 GHz). We first identified challenges related to the stability of the complex in solution via DEER and X-ray crystallography and showed that even in cases where complex disproportionation is unavoidable, Co II -NO PDS measurements are feasible and give good signal-to-noise (SNR) ratios. Specifically, DEER and five-pulse RIDME exhibited an SNR of ~100, and while the six-pulse RIDME exhibited compromised SNR, it helped us minimize unwanted signals from the RIDME traces. Last, we demonstrated RIDME at a 10 μM sample concentration. Our results demonstrate paramagnetic Co II to be a feasible spin center in medium magnetic fields with opportunities for PDS studies involving Co II ions. Introduction Pulse dipolar electron paramagnetic resonance (EPR) spectroscopy (PDS) allows measuring the distance between two or more electron spins in the nanometer range by exploiting their magnetic dipole moment [1][2][3]. Typically, PDS measures the dipolar coupling ω_dd between two spins, which scales with the inverse cube of their distance r, according to equation [4]: ω_dd = (μ0 μB^2 g1 g2 / (4π ħ r^3)) (1 − 3 cos^2 θ), where μ0 is the permeability of the vacuum, μB is the Bohr magneton, g1 and g2 are the g factors of the two electron spins, ħ is the reduced Planck constant, and θ is the angle between the vector connecting the two spin centers and the external magnetic field. In this work, we focus on two PDS techniques, the double electron-electron resonance [2,[5][6][7] (DEER/PELDOR) and relaxation-induced dipolar modulation enhancement [8] (RIDME) experiments (pulse sequences in Figure 1), to measure the distance between two different spin centers: that of paramagnetic Co II and a nitroxide (NO) radical placed on a chemical model system. In DEER and RIDME, the dipolar coupling is measured with pulses applied at two or one microwave (mw) frequencies, respectively. In DEER, one set of spins is monitored on the observe frequency (νobs), while another set of spins is flipped on the pump frequency (νpump) (Figure 1a). If the distance to be measured involves two different types of spin centers, e.g., one paramagnetic metal ion (Cu II , Gd III , Mn II with electron spin S = 1/2, 7/2, 5/2, respectively) and one organic radical (nitroxide or trityl radical, S = 1/2), the pulses are placed to observe one type of spin while flipping the other type of spin. In order to obtain high effective sensitivity, one typically pumps the NO or trityl spin, which exhibits a narrow EPR spectrum, while observing the paramagnetic metal ion (see a comparison of their EPR linewidths in Table S1, Supplementary Materials).
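A minimal numerical sketch of this relation, assuming the free-electron g value for both spins (the actual Co II and nitroxide g values differ slightly and rescale the prefactor):

```python
import numpy as np

# Minimal sketch of the distance <-> dipolar-coupling relation for two
# S = 1/2 spins.  The free-electron g value is assumed for both spins.

mu0 = 4.0e-7 * np.pi       # vacuum permeability, N A^-2
muB = 9.2740e-24           # Bohr magneton, J T^-1
h = 6.6261e-34             # Planck constant, J s
g1 = g2 = 2.0023

def nu_dd_MHz(r_nm, theta_deg=90.0):
    """Dipolar frequency (MHz) at distance r_nm and angle theta (deg)."""
    nu_perp = mu0 * muB**2 * g1 * g2 / (4.0 * np.pi * h * (r_nm * 1e-9) ** 3)
    return 1e-6 * nu_perp * (1.0 - 3.0 * np.cos(np.radians(theta_deg)) ** 2)

def r_nm_from_nu_perp(nu_perp_MHz):
    """Distance (nm) from the perpendicular dipolar frequency (MHz)."""
    nu_1nm = nu_dd_MHz(1.0)                    # ~52 MHz at 1 nm for g ~ 2
    return (nu_1nm / nu_perp_MHz) ** (1.0 / 3.0)

print(f"nu_dd at 2.6 nm (theta = 90 deg): {nu_dd_MHz(2.6):.2f} MHz")   # ~3 MHz
print(f"r for nu_perp = 3.0 MHz:          {r_nm_from_nu_perp(3.0):.2f} nm")
```

The ~52 MHz nm^3 prefactor for g ≈ 2 is why a pair separated by ~2.6 nm, as studied below, gives dipolar oscillations at roughly 3 MHz.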
In RIDME, one type of spin is observed in the frequency νobs, while the other type of spin is let to flip spontaneously by its longitudinal relaxation during the time interval Tmix [9] (Figure 1b,c). Therefore, RIDME performs well in systems involving a slow and a fast relaxing spin, such as a fast relaxing paramagnetic metal ion and a slower relaxing organic radical [10][11][12][13][14][15][16][17][18], but it has also been expanded to two paramagnetic metal centers [16,[19][20][21][22][23][24][25][26][27], as well as between organic radicals [28][29][30]. The increased sensitivity of RIDME in such mixed systems stems from the inherent increased modulation depth (Δ) of the experiment as a result of a large amount of spontaneously flipped spins, in contrast to the limited excitation bandwidth by mw pulses in DEER [31]. Additionally, during RIDME, one can exploit working under close to critically coupling conditions as only one frequency needs to be accommodated in the resonator profile, in contrast to DEER, where an over-coupled resonator is required in order to accommodate two frequencies. Another advantage of RIDME is that it is free of orientation selection (OS) effects with respect to the flipped spin, assuming a homogeneous longitudinal relaxation of the paramagnetic metal center along the EPR spectrum, simply because spins from the entire EPR spectrum are flipped during Tmix. It should be also noted that in contrast to high-spin Gd III [23,25,26] and Mn II [15,24] ions, where RIDME affords overtone frequencies due to the excitation of EPR transitions between multiple spin manifolds, this is not the case for 1/2 spin metal ions, such as of Co II studied here. Low-spin Co II , like Cu II , exhibits only one spin transition (−1/2→+1/2); therefore, the analysis and interpretation of data is straightforward and similar to DEER. The major limitation of RIDME is the steep background signal decay that is particularly relevant for protonated samples [11,32] and, to a lesser extent, some systematic signals (artifacts) [16,30] appearing in the RIDME traces. While the first are related to sample concentration and the matrix used, the latter are related primarily to experimental parameters. Recently, Abdullin et al. introduced the six-pulse RIDME sequence (Figure 1c), which was shown to significantly minimize these unwanted signals [30]. In the present work, we report on the use of DEER and RIDME with a pair of Co II /NO spins engineered into a chemical model system. Low-spin Co II is a less explored metal ion than Cu II , Gd III or Mn II , even though its favorable spectroscopic properties (S = 1/2 and not very fast relaxation [46]) make it suitable for PDS. Surprisingly, even though there have been developed ligands that bind to Cu II [47][48][49], Gd III [37] or Mn II [50,51] ions serving as spin labels, no such ligand has been developed for Co II for PDS. One challenge of Co II is its broad EPR spectrum of ~75 mT at the X-band, though it is comparable to that of Cu II (see Table S1, Supplementary Materials), whereas the synthesis of ligands that afford lowspin Co II is probably the most challenging part. Additionally, Co II with terpyridine ligands is known to behave as a spin crossover system above temperatures of 30 K [52,53]. So far, PDS studies on Co II are sparse and involve methodological developments. 
The first report on Co II demonstrated, for the first time, the use of broadband (wideband uniform rate smooth truncation, WURST) pulses to improve the signal-to-noise ratio (SNR) of Co II -NO DEER at the X-band by increasing the number of pumped Co II spins [54]. In another report, three spin effects were deliberately manifested in a NO-spacer-Co II -spacer-NO chemical model using WURST pulses that excited the entire NO EPR spectrum at X-band DEER measurements [46]. Additionally, OS effects in a Co II /NO system were studied with W-band DEER and RIDME, where the g-anisotropy of NO was resolved and only the low g component of Co II could be observed due to bandwidth limitations [10]. On the other hand, Co II has been taken up by paramagnetic nuclear magnetic resonance (NMR) studies involving high-spin Co II (S = 3/2) bound to a protein via a thio-reactive EDTA ligand [55,56], via the doubleHis motif [57], via a ligand that binds with click chemistry to RNA [58] or by replacing their diamagnetic counterparts in metal-dependent proteins [59,60]. Here, we expand the methodology to Co II -NO PDS at Q-band frequencies on a wellcharacterized Co II /NO chemical model. Specifically, we employ and discuss the performance of the standard five-pulse RIDME, as well as its six-pulse variant, and of DEER. We show that five-pulse RIDME and DEER have comparable SNR, whereas the SNR of six-pulse RIDME is somewhat compromised. We further show that the artifacts of the five-pulse RIDME are minimized in the six-pulse experiment also for the Co II /NO pair. Last, we show Co II -NO RIDME measurements to be feasible on a 10 μM sample. Synthesis of the Chemical Model The aim was to synthesize a chemical model system that bears a low-spin Co II and a NO with the distance between NO and Co II in the accessible distance range for PDS, minimizing through bond communication between the two spins, i.e., the spacer between the NO and Co II should not feature extended conjugation. Low-spin Co II is afforded by an octahedral geometry around the metal center with strong-field ligands, such as terpyridine. Thus, we coordinated the metal with two terpyridine-based ligands, one of which is functionalized with an NO at its end. To do so, we first synthesized a precursor, (terpyridine)Co II Cl2, from terpyridine and dichlorobis(triphenylphosphine)Co II (1a, see reaction in Scheme 1 and Figure S1, Supplementary Materials), in which Co II has a mononuclear pseudo-square pyramidal geometry found previously from X-ray structure determination [61]. Then, 1a was reacted with a previously characterized NO-labeled terpyridine ligand L [62] to form the target complex 1, which was isolated as a solid after precipitation with PF6 − counter ions and characterized by mass spectrometry and elemental analysis (see Section 4 and Figure S2, Supplementary Materials). The ester bond in L is expected to disrupt the conjugation between NO and Co II spins, as it was shown previously that the introduction of an ester bond afforded negligible conjugation between NO and Cu II spins [63]. Additionally, in L2Co II complex, DEER [46] and RIDME [10] did not show through-bond communication between the spins. Scheme 1. Synthetic procedure of precursor 1a and of target complex 1. 
PDS on 1 Initial attempts to perform PDS distance measurements on 1 in various organic solvents or their mixtures failed due to poor solubility of the complex in organic media as well as 'bad glass' formation upon sample freezing and, subsequently, fast relaxation properties of the paramagnetic species [64]. Generally, we found that dissolving the complex in a small percentage of coordinating solvent and then adding the non-polar organic solvent improved solubility and helped the formation of 'good glass' upon sample freezing. The solvents that worked well here were DMF-d7/C7D8 (1/9) for RIDME and DMF/2-MeTHF (1/9) for DEER samples. Deuteration in RIDME is necessary, as protons can significantly affect the background decay of the experiment, complicating data analysis [32]. Additionally, we found that weakly coordinating anions further improve the relaxation properties of the Co II . Therefore, we proceeded to in situ exchanging the PF6 − ions with the more weakly coordinating anion BPh4 − [65] by adding excess of NaBPh4 salt before addition of the solvents (see Section 4). We performed Co II -NO DEER at 15 K at Q-band frequencies in 1 mM solution of 1 ( Figure 2). Slowing down the Co II relaxation is crucial, as a fast transverse relaxation of Co II would disfavor DEER when observing Co II (see Figures S4-S6, Supplementary Materials for X-and Q-band relaxation data on 1). We initially performed the DEER by pumping NO and observing Co II with the setup shown in Figure 2a, where the blue and red lines indicate the observe and pump positions, respectively, and Δν is the pump-probe frequency offset of 150 MHz (5.3 mT). The experiment exhibited a steep background decay due to pumping NO spins, which are in high concentrations, affording a modulation depth, Δ, of ~26% and a Co II -NO distance of ~2.6 nm when using the DeerAnalysis [66] software. The distance is in agreement with the X-ray structure of L [62] and previous distance measurements on similar compounds [10,11,46], and Δ is close to the value on bis-nitroxide-labeled protein samples [67] under our spectrometer and experimental conditions, indicating that the majority of chemical species in the solution are complex 1. The sensitivity (often synonymous with the SNR) was calculated (see Section 4 and Supplementary Materials, Table S4) to be 103. Throughout the text we refer to the SNR as the modulation-to-noise ratio, whereas the sensitivity corrected for different numbers of accumulated echoes and repetition rates is referred to as St (sensitivity per unit time, given in the Supplementary Materials, Table S4). We additionally performed a DEER measurement by pumping Co II and observing NO spins using the same setup as in Figure 2a, with the pump-probe positions exchanged. In this case, the pump pulse excited only a small fraction of the broad Co II EPR spectrum, affording a less pronounced background decay and a Δ of ~0.6%. Using this setup, the SNR was 11, i.e., 10 times lower than when pumping NO, as expected. Nonetheless, the measurement afforded again a ~2.6 nm Co II -NO distance with high reliability. It should be mentioned that while Co II -NO DEER is feasible at the Q-band, the corresponding Cu II -NO measurements are challenging due to the large spectral separation of Cu II and NO spins of ~20 mT exceeding the bandwidth of most setups [46]. Here, we also tested directly whether the ligands exchange in solution by performing NO-NO DEER ( Figure S7, SI). 
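Before turning to the ligand-exchange test, the numbers above can be connected in a small sketch of an ideal intramolecular DEER form factor for a single 2.6 nm Co II -NO distance with Δ ≈ 26%. The background decay, the finite width of the distance distribution, and orientation-selection effects are deliberately ignored, so this is an illustration rather than a simulation of the measured trace.

```python
import numpy as np

# Minimal sketch of an ideal intramolecular DEER form factor for a single
# Co(II)-NO distance, using the ~2.6 nm distance and ~26% modulation depth
# quoted above.  Background, distribution width and orientation selection
# are ignored.

r_nm = 2.6
delta = 0.26
nu_perp = 52.04 / r_nm**3                      # MHz, perpendicular dipolar frequency (g ~ 2)

t = np.linspace(0.0, 3.0, 601)                 # evolution time, us
theta = np.linspace(0.0, np.pi / 2, 2000)
w = np.sin(theta)
w /= w.sum()                                   # powder (sin-theta) weighting

nu = nu_perp * (1.0 - 3.0 * np.cos(theta) ** 2)              # MHz
kernel = np.cos(2.0 * np.pi * np.outer(t, nu))               # shape (t, theta)
form_factor = (1.0 - delta) + delta * (kernel * w).sum(axis=1)

print(f"V(0) = {form_factor[0]:.2f}")
print(f"first dipolar minimum near t = {t[np.argmin(form_factor[:200])]:.2f} us")
```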
If ligand exchange occurs, the species expected to form in solution are L2Co II , (terpyridine)2Co II and 1 (in 1:1:2 ratio) and we should recover a NO-NO DEER oscillation, originating from L2Co II complexes. The NO-NO DEER was performed by observing and pumping NO with Δν = 80 MHz (2.9 mT) at 20 K under conditions optimized for the NO, i.e., with a repetition time of 20 ms. NO-NO DEER is known to perform optimally at 40-60 K [68,69]; however, here, a lower measurement temperature was necessary due to the paramagnetic relaxation enhancement of the NO spins by Co II . Previously, we found that a temperature of 10-20 K is optimal for measuring NO-NO DEER in the presence of Co II [46], in agreement with the X-band NO relaxation data (Supplementary Materials, Figure S5). The NO-NO DEER trace featured again a steep background decay due to the high concentration of NO spins and a Δ of ~17%. The NO-NO distance was found to be 5.2 nm, which is double the Co II -NO distance originating from L2Co II complexes, suggesting that partial ligand mixing occurred before freezing the sample. Moreover, ligand mixing was also observed on a similar Co II complex that does not bear a NO group using X-ray structure determination (see details in Supplementary Materials). Both NO-NO DEER and the X-ray demonstrate the kinetic lability of terpyridine ligands with Co II ions that might affect the performance of PDS on 1. We then proceeded to perform Co II -NO RIDME measurements on 1 mM solution of 1 at 15 K. The experiment was performed by observing at the maximum of the NO EPR spectrum (indicated with the green line on Figure 2a) to minimize OS effects and to obtain maximum sensitivity, whereas the Co II spin flip was achieved spontaneously during the Tmix. We performed both the five-pulse and the newly introduced six-pulse sequence. The most commonly used five-pulse RIDME is known to exhibit artifacts that alter the background and might affect the shape of the distance distribution [16,30]. These artifacts were found to appear at times t = τ1 and t = τ2 − τ1 [30], and the six-pulse RIDME was shown to significantly minimize them. On the downside, six-pulse RIDME can feature artifacts at t = τ2 − 2τ1, which, however, can be truncated during data analysis. As we worked at Qband and the sample was dissolved in a deuterated matrix, each RIDME experiment was recorded as a unique measurement with τ-averaging over one period of the inverse deuterium Larmor frequency proving sufficient to remove unwanted electron spin echo envelope modulation (ESEEM), without the need to perform a reference measurement. The estimation of the Tmix was performed by the inversion recovery profile of Co II spins, as well as from our previous data on a similar Co II complex [10]. The primary RIDME data were analyzed by fitting a fifth-order polynomial background function due to the pronounced background decay of RIDME. Δ was found to be ~36% and ~32% and the SNR to be 114 and 73 for the five-and six-pulse sequences, respectively. The modulation depth was less than the 45% expected for quantitative metal-NO pairs [11,16,44], indicating again partial ligand mixing in agreement with the Co II -NO and NO-NO DEER data. The SNR of the five-pulse RIDME was similar to that of the DEER pumping NO (see Table S4, Supplementary Materials), whereas the six-pulse RIDME measurement exhibited lower SNR. The five-pulse sequence exhibited an artifact at 1.4-1.6 μs, which we assign to the τ2 − τ1 artifact observed by Abdullin et al. [30]. 
This artifact was significantly smaller in the six-pulse RIDME measurement. The Co II -NO distance distribution was again found to be centered at ~2.6 nm, in agreement with the DEER experiments and similar metal complexes [10,15,39,46]. Lastly, we proceeded to test the performance of Co II -NO RIDME on a sample two orders of magnitude lower in concentration, i.e., on a 10 μM sample of 1 (Figure 3). Again, the five-and six-pulse RIDME sequences were performed on the maximum of the NO spectrum (green line on Figure 2a). As the sample concentration was significantly lower, we found the longitudinal relaxation (T1) and other T1-related relaxation effects to be significantly slower, and we performed RIDME at 30 K, as we also did not see significant modulation at 15 K. In this sample, the RIDME experiment exhibited a reduced Δ of ~18% and ~13%, with SNR values of 29 and 1 for the five-and six-pulse sequences, respectively. The lower Δ of the 10 μM sample might be due to the coordination of DMF-d7 on the Co II ion upon sample dilution or due to a suboptimal Tmix. The SNR of five-pulse RIDME on the 10 μM sample was approximately four times lower than the 1 mM sample. Partially, the lower SNR can be attributed to the lower Δ of the 10 μM sample. The rest of the loss in SNR comes from the lower sample concentration itself. Again, the five-pulse RIDME exhibited an artifact at short times (300 ns), as well as at ~1.1-1.5 μs, which we tentatively assign to those observed by Abdullin et al. [30], as both were reduced in the six-pulse experiment. Overall, we could measure Co II -NO RIDME on a 10 μM sample of 1, the SNR of which was compromised by the sample properties, nonetheless not affecting the reliable determination of the distance distribution, which is in agreement with the PDS data on the higher concentration sample. Conclusions In this work, we reported the synthesis of a new Co II /nitroxide complex for DEER and RIDME measurements at Q-band frequencies. The tailor-made complex geometry afforded a low-spin Co II with favorable spectroscopic properties for DEER and RIDME. Particularly, Co II -NO RIDME was employed on a 10 μM sample and the application of the six-pulse sequence helped us eliminate unwanted signals from the time traces. The overall SNR of five-pulse RIDME and DEER measurements were similar; however, Co II -NO RIDME did not reach the potential of the Cu II -NO pair, where applications down to 500 nM have been reported [22,44]. One limitation of the SNR comes from the sample properties. We have shown using X-ray crystallography on a similar Co II complex, as well as with NO-NO DEER on 1, that a fraction of the complex disproportionates in coordinating solvents. The issue of complex disproportionation becomes resolved in a scenario where water-soluble ligands would be designed that bind Co II tighter than terpyridine. This would, in turn, allow improved RIDME performance. As reports on Co II ions in PDS are sparse, measurements at Q-band frequencies, becoming widely adopted by most EPR laboratories, allow further establishing Co II for PDS. We further revealed some of the challenges that have to be met before this metal ion becomes a promising candidate for applications in biomolecular samples. General Synthesis Conditions All commercially available reagents were used as purchased: dichlorobis(triphenylphosphine)Co II (Sigma Aldrich, St. 
Louis, MO, USA), terpyridine (Alfa Aesar, Haverhill, MA, USA) and NH4PF6 (Acros Organics, Geel, Belgium), while the synthesis of ligand L has been reported previously [62]. Solvents were of laboratory-grade purity, reactions were performed in open air, and rt refers to room temperature (20-25 °C). Infrared (IR) spectra were acquired on a Shimadzu Fourier transform IR Affinity-1 infrared spectrometer. Nuclear magnetic resonance (NMR) spectra were acquired on a 500 MHz Bruker Ascend spectrometer in the deuterated solvent stated. Chemical shifts are quoted in parts per million (ppm) and referenced to the residual solvent peak(s). Mass spectrometric data were acquired via atmospheric pressure chemical ionization (APCI) and matrix-assisted laser desorption/ionization (MALDI) at the EPSRC National Facility for Mass Spectrometry, Swansea. Elemental analysis was performed at London Metropolitan University, where the solid samples were weighed using a Mettler Toledo high-precision scale and analyzed using a ThermoFlash 2000. X-ray Crystallography. X-ray diffraction data for compound 2ʹ (see structure in Supplementary Materials) were collected at 173 K using a Rigaku MM-007HF High Brilliance RA generator/confocal optics with an XtaLAB P100 diffractometer (Cu Kα radiation, λ = 1.54187 Å). Intensity data were collected using both ω and φ steps, accumulating area detector images spanning at least a hemisphere of reciprocal space. Data were collected and processed (including correction for Lorentz, polarization and absorption effects) using CrystalClear [72]. The structure was solved using charge-flipping methods (Superflip [73]) and refined using full-matrix least-squares against F^2 (SHELXL-201/3 [74]). Non-hydrogen atoms were refined anisotropically, and hydrogen atoms were refined using a riding model. The thin, platy crystals diffracted weakly at higher angles, even with long exposures. These weaker high-resolution data led to elevated values of Rint, poor observed-to-unique data ratios and minor discrepancies in some bond lengths. The structure could nevertheless be unambiguously determined. All calculations were performed using the CrystalStructure program. EPR Sample Preparation. 10 equivalents of NaBPh4 were added to solid 1, the solids were taken up in DMF (or DMF-d7) and mixed thoroughly via pipetting until everything was dissolved, forming a transparent brown solution. Then, C7D8 or 2-MeTHF was added, the mixture was again mixed thoroughly, transferred to the EPR tube (3 mm outer diameter, OD) and frozen in liquid nitrogen. The final sample volume was 75 μL. The 10 μM sample was prepared by thawing the 1 mM sample and diluting it with a pre-mixed solution of DMF-d7/C7D8 (1/9). EPR Spectroscopy. All EPR data were recorded on a Bruker ELEXSYS E580 pulsed X-band (9.7 GHz) or Q-band (34.0 GHz) spectrometer including the second frequency option (E580-400U). Pulses were amplified by travelling wave tube (TWT) amplifiers (1 kW at X-band and 150 W at Q-band) from Applied Systems Engineering. An MD5 dielectric ring resonator (X-band), a 3 mm cylindrical resonator ER 5106QT-2w in TE012 mode (Q-band) and standard flex line probe heads were used. The temperature was stabilized using a continuous flow via a variable temperature helium flow cryostat from Oxford Instruments (X-band) or a cryogen-free variable temperature cryostat from Cryogenic Ltd. (Q-band).
Echo-Detected EPR Spectrum. The echo-detected EPR (ED-EPR) spectrum was recorded at 15 K and optimized for the Co II spin using the (π/2 − τ − π − τ − echo) sequence, monitoring the echo intensity while sweeping the magnetic field. Here, the π/2 and π pulses were set to 12 and 24 ns, to 400 μs, and the repetition time to 407 μs. Inversion Recovery Data. Inversion recovery (I.R.) data were collected using the (π − T − π/2 − τ − π − τ − echo) sequence, monitoring the echo intensity as a function of the interval T. Co II and NO X-band I.R. data were recorded in the temperature range 5-50 K at 340.0 mT and 345.3 mT, respectively, using π/2 and π pulses of 20 and 40 ns, an inversion pulse of 20 ns and of 200 μs; the time increment and repetition time varied with the temperature. Co II Q-band I.R. was recorded at 1160 mT at 30 K using π/2 and π pulses of 12 and 24 ns, respectively, a 24 ns inversion pulse and of 800 μs, a repetition time of 1 ms and a time increment of 250 ns (see Figures S4-S6, Supplementary Materials). Phase Memory Time Data. Phase memory time data were collected using a Hahn echo (π/2 − τ − π − τ − echo) sequence, monitoring the echo intensity as a function of the interval τ. Co II and NO X-band phase memory time data were recorded in the temperature range 5-50 K at 340.0 mT and 345.3 mT, respectively, using π/2 and π pulses of 16 and 32 ns, a starting of 120 μs and a time increment of 12 ns; the repetition time varied with the temperature. Co II phase memory time data exhibited strong Co II ESEEM due to interaction of the unpaired electron with the nuclear spin of Co II (I = 7/2), whereas NO data exhibited 1H ESEEM. NO and Co II Q-band phase memory times were recorded at 1213 and 1196 mT at 15 K using π/2 and π pulses of 12 and 14 ns (NO) and 16 and 32 ns (Co II ), a starting of 380 μs, a time increment of 20 ns and a repetition time of 5 ms (see Figures S4-S6, Supplementary Materials). Data Analysis. The primary DEER and RIDME data were transformed into distance distributions using the DeerAnalysis2018 [66] software and Tikhonov regularization with the L-curve [76] criterion. The background contributions to the primary DEER data were removed by fitting a background homogeneous in three dimensions for the DEER and a fifth-order polynomial function for the RIDME data, using the default background start value, giving the Δ values reported herein. An exception is the 5-pulse RIDME of the 1 mM sample, where the background start was set manually to 100 ns, as the default background start was unrealistic. In general, the RIDME data on the 1 mM sample could also be fitted by fitting the dimensionality of the background or with different-order polynomial functions, but as the 10 μM data could only be fitted with a fifth-order polynomial function, we analyzed all RIDME data similarly. The regularization parameter was 10 for the RIDME and 1 for the DEER data, respectively. The contributions of the background signal were evaluated within the validation tool of the DeerAnalysis2018 program. The validation was performed from 5% to 80% of the time traces for DEER and from 5% to 15% for RIDME, respectively, in 16 trials, and a white noise of level 1.5 in 10 trials was added. In the NO-NO DEER measurements, no noise was added during validation. For all validation procedures, only datasets within 15% of the best root-mean-square deviation were retained (i.e., default prune level 1.15), affording the confidence intervals (gray shadowed areas) of the plotted distance distributions.
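A minimal sketch of the RIDME background treatment just described, on synthetic data; the real analysis was performed in DeerAnalysis2018, not with this script, and the decay and modulation parameters below are illustrative.

```python
import numpy as np

# Minimal sketch of RIDME background removal: fit a fifth-order polynomial to
# the decaying trace from a chosen background start and divide it out.

t = np.linspace(0.0, 3.0, 601)                               # us
background = np.exp(-1.0 * t - 0.2 * t**2)                   # steep non-exponential decay (illustrative)
delta = 0.36
signal = (1.0 - delta) + delta * np.cos(2.0 * np.pi * 3.0 * t) * np.exp(-t / 0.8)
trace = background * signal

start = t >= 0.1                                             # background start at 100 ns, as used above
coeffs = np.polyfit(t[start], trace[start], deg=5)
form_factor = trace / np.polyval(coeffs, t)

print(f"apparent modulation depth ~ {form_factor[start].max() - form_factor[start].min():.2f}")
```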
The color bars below the distance distributions denote the reliability of the distance as follows: green = shape reliable; yellow = mean and width reliable; orange = mean reliable; red = non-reliable. The data were also analyzed with two user-unbiased methods; first, DEERNet, which utilizes neuronal networks to predict the distance distribution [77] within Spinach [78] software in MATLAB R2020b. Second, with simultaneous comparative treatment employing neuronal network analysis and Tikhonov regularization (ComparativeDeerAnalyzer, CDA), which computes the distance distribution and its uncertainty using DeerAnalysis2021b in MATLAB R2020b. The DEERNet and CDA analysis results are given in Supplementary Materials, Figures S8-S20, respectively. CDA could not run for the 5-pulse RIDME measurement on the 10 μM sample. The modulation depth values as calculated automatically from DEERNet and CDA are reported in Table S4, Supplementary Materials. All data are available in reference [79]. Calculation of Sensitivity The sensitivity was calculated similarly to what was described previously [44]. Briefly, it was first calculated as modulation-to-noise ratio, i.e., Δ/noise level, where Δ was calculated automatically in CDA and the noise level was estimated using the imaginary part of phase-corrected and normalized time domain data, again calculated in CDA or, where mentioned, with a self-written MATLAB script. Then, this value was divided by the square root (sqrt) of (number of scans × shots-per-point × τ-averaging × phase cycle) to give the modulation-to-noise ratio normalized for the number of echoes, Se. To account for different repetition rates, Se was multiplied with the sqrt of the inverse repetition rate to yield the sensitivity per unit time (St). The sensitivity values are mentioned throughout the texts and are summarized in Supplementary Materials, Table S4. Informed Consent Statement: Not applicable. Data Availability Statement: Digital data underpinning the results presented in this manuscript.
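The sensitivity bookkeeping described above can be summarized in a short sketch; the input numbers are illustrative placeholders rather than values taken from Table S4.

```python
import numpy as np

# Minimal sketch of the sensitivity bookkeeping: modulation-to-noise ratio,
# sensitivity per accumulated echo (Se), and sensitivity per unit time (St).

def sensitivities(mod_depth, noise_level, n_scans, shots_per_point,
                  tau_averages, phase_cycle_steps, shot_rep_time_s):
    snr = mod_depth / noise_level                              # modulation-to-noise ratio
    n_echoes = n_scans * shots_per_point * tau_averages * phase_cycle_steps
    s_e = snr / np.sqrt(n_echoes)                              # per accumulated echo
    s_t = s_e * np.sqrt(1.0 / shot_rep_time_s)                 # per unit time (echoes per second)
    return snr, s_e, s_t

snr, s_e, s_t = sensitivities(mod_depth=0.36, noise_level=0.0036,
                              n_scans=20, shots_per_point=50,
                              tau_averages=8, phase_cycle_steps=2,
                              shot_rep_time_s=5e-3)
print(f"SNR = {snr:.0f}, Se = {s_e:.3f}, St = {s_t:.2f} s^-1/2")
```

Normalizing by the number of accumulated echoes and by the shot repetition time is what allows DEER and RIDME measurements recorded with different averaging and repetition settings to be compared on an equal footing.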
6,694.8
2022-04-11T00:00:00.000
[ "Chemistry" ]
Controlled Preparation of Single-Walled Carbon Nanotubes as Materials for Electronics Single-walled carbon nanotubes (SWCNTs) are of particular interest as channel materials for field-effect transistors due to their unique structure and excellent properties. The controlled preparation of SWCNTs that meet the requirement of semiconducting and chiral purity, high density, and good alignment for high-performance electronics has become a key challenge in this field. In this Outlook, we outline the efforts in the preparation of SWCNTs for electronics from three main aspects, structure-controlled growth, selective sorting, and solution assembly, and discuss the remaining challenges and opportunities. We expect that this Outlook can provide some ideas for addressing the existing challenges and inspire the development of SWCNT-based high-performance electronics. INTRODUCTION High-performance microprocessors containing very-large-scale integrated circuits (ICs) of silicon-based field-effect transistors (FETs) are the cornerstones of modern computing and communicating applications that dominate the progress of modern industry and our daily life. In order to meet the increasing demand for high performance and more complex application scenarios, researchers have been exploring new electronic materials, such as carbon nanotubes (CNTs), graphene, transition-metal dichalcogenides, and III−V semiconductors. 1−4 Among them, CNTs are of particular interest. 5−8 Semiconducting single-walled CNTs (s-SWCNTs) are applicable for FETs as channel materials due to their unique structure and excellent properties. The quasi-one-dimensional topology and ultrathin tube diameter of SWCNTs are beneficial to minimizing the short-channel effects and realizing superior gate control under extreme device scaling. 9−11 The low carrier effective mass, 1 high and symmetrical carrier mobilities (intrinsically up to 100000 cm 2 /(V s)), 12 high current-carrying capacity, 13 and quasi-ballistic transport 13 of s-SWCNTs enable a high driving capability and high-speed switching at low voltages. The current density and transconductance are respectively 25 μA 13 and 55 μS 10 per nanotube as reported. The high thermal, chemical, and mechanical stability in carrier transport with outstanding flexibility provide devices with resistance to extreme working conditions, such as high temperature, 14 cryogenic temperature, 15,16 high-energy radiation, 17 and strains. 17,18 As onedimensional direct-band-gap semiconductors that exhibit naturally polarized, narrow-banded, and peak-tunable light emission and absorption in the near-infrared spectral range, SWCNTs can also be applied in on-chip optical interconnects. 19−23 Over the past 25 years, SWCNT FET technology has matured in the laboratory. The first p-type transistor fabricated by Dekker et al. in 1997 24 showed a small device current because the Schottky barrier between Pt electrodes and SWCNTs hindered hole injection. With the successive exploitation of Pd electrodes 13,25 and Sc/Y electrodes, 15,26 which exhibit perfect Ohmic contacts with the valence and conduction bands of SWCNTs, respectively, both p-type and n-type SWCNT FETs with performance approaching the ballistic limit have been realized. On this basis, doping-free symmetrical complementary metal oxide semiconductor (CMOS) circuits, 27 digital logic gates, 28−30 and a computer composed of 178 SWCNT transistors 31 were fabricated. In general, SWCNT FETs have the advantages of low power consumption and high frequency. 
For example, individual SWCNT-based FETs with gate lengths as short as 5 nm outperformed state-of-the-art Si FETs in supply voltage and pitch-normalized current density. 10 SWCNT-array-based ICs exhibited a real speed higher than that of conventional Si ICs with similar gate lengths (Figure 1a,b). 32 The feasibility of integrating SWCNT FETs has been verified by a modern microprocessor comprising more than 14000 FETs ( Figure 1c). 33 Recently, SWCNT ICs have demonstrated rich application potential in fields such as wireless communication, 34 neuromorphic computing, 35 wearable devices, 36,37 and biosensing platforms. 38 Despite these advances, some factors still severely limit the large-scale fabrication and industrialization of SWCNT FETs. The issues of material purity and array assembly are critical ones, as previously revealed by Avouris 39 and Franklin. 40 SWCNTs are categorized into various chiralities indexed by two integers (n,m) that determine the tube diameter and band structure. 41 Only two-thirds of these chiralities that meet the condition of (n − m) MOD 3 ≠ 0 correspond to semiconducting species, of which the band gap is approximately inversely proportional to their diameters. 42,43 The other one-third corresponds to metallic species. Even just one metallic SWCNT (m-SWCNT) in the channel will shortcircuit the FET. In a competitive very-large-scale IC, SWCNTs are required to be of semiconducting purity >99.9999% and assemble into highly ordered monolayer arrays of high density with a consistent tube pitch of 5−10 nm (100−200 tubes/μm) (Figure 1d), to exhibit a high on/off ratio and sufficient driving ability without inefficient metal contacts and harmful intertube screening caused by poor alignment and bundling. 32,40,44 Furthermore, to minimize the device-to-device variation caused by differences in band gaps, s-SWCNTs with a narrow diameter distribution around 1.2−1.7 nm, 45 or better with a suitable chirality, are preferred. The goal in the controlled preparation of SWCNTs is to control the electrical structure, which is basically the process of band-gap engineering in the semiconductor industry. The primary target is to prepare highly pure s-SWCNTs by controlled growth and sorting (Figure 1e), and the ultimate goal is to prepare s-SWCNTs with identical band gaps (determined by chiralities) in adesirable range. The highest semiconducting purity achieved to date by controlled growth is close to 99.9%, 46,47 and the chirality purity is ∼97.4%. 46 The highest semiconducting purity achieved by sorting is >99.9999% through a multistep treatment with conjugated polymers. 32 Combining controlled growth and sorting techniques will be the solution to achieve both high semiconducting and chiral purity. Based on sorted dispersions of s-SWCNTs, various solution methods succeeded in realizing arrays with good alignment, but only a few achieved the density target. 32,34,48 In this Outlook, we will outline the efforts to prepare SWCNTs as materials for electronics and discuss the remaining challenges and opportunities. In the following sections, the methodologies, main progress, and opinions regarding the further development of structure-controlled growth, selective sorting, and solution assembly of SWCNTs will be demonstrated. In the end, we will summarize the present situation and future directions in the field. We expect that this Outlook could give an idea of the package solution of SWCNT preparation, inspiring the development of highperformance electronics. 
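The (n,m) bookkeeping behind these purity targets can be made concrete with a small sketch; the 0.84 eV nm band-gap prefactor below is an approximate textbook value, not a number taken from this Outlook.

```python
import numpy as np

# Minimal sketch of (n,m) bookkeeping: metallic vs semiconducting assignment
# from (n - m) mod 3, tube diameter from the graphene lattice constant, and a
# rough Eg ~ 1/d estimate for semiconducting tubes.

A_LATTICE = 0.246      # nm, graphene lattice constant
EG_PREFACTOR = 0.84    # eV nm, approximate band-gap prefactor (assumed)

def diameter_nm(n, m):
    return A_LATTICE * np.sqrt(n * n + n * m + m * m) / np.pi

def classify(n, m):
    kind = "semiconducting" if (n - m) % 3 != 0 else "metallic"
    d = diameter_nm(n, m)
    eg = EG_PREFACTOR / d if kind == "semiconducting" else 0.0
    return kind, d, eg

for n, m in [(6, 5), (9, 8), (14, 4), (16, 0), (10, 10)]:
    kind, d, eg = classify(n, m)
    gap = f", Eg ~ {eg:.2f} eV" if eg else ""
    print(f"({n:2d},{m:2d}): {kind:14s} d = {d:.2f} nm{gap}")
```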
CONTROLLED GROWTH OF s-SWCNTS Currently, chemical vapor deposition (CVD) is the most widely used method to synthesize SWCNTs. Band-gap control and tube alignment are the two key issues in the synthesis of SCWNTs for electronic applications. The ultimate goal is to grow s-SWCNTs with ultrahigh purity and identical band gap. Arrays of good alignment and high density are also desired. Though it is still far from the target, important progress has been achieved, lighting the future pathway. Band-Gap Engineering in the Controlled Synthesis. 2.1.1. Selective Growth of s-SWCNTs by Etching and Twisting. Since m-SWCNTs have available density of states near the Fermi level, while s-SWCNTs do not, metallic tubes exhibit lower ionization energy and higher oxidizability ( Figure 2a). Taking advantage of this difference, many strategies have been developed to selectively prepare s-SWCNTs by inhibiting the growth of metallic nanotubes or etching them away. In 2009, Liu et al. discovered that horizontally aligned arrays of s-SWCNTs were selectively grown on quartz substrates when an appropriate amount of methanol was added to the ethanol feedstock, initiating a large number of explorations with similar strategies (Figure 2b−d). 49 The tubes showed a semiconducting selectivity of ∼95% and a narrow diameter distribution of 1.4−1.8 nm. The • OH radical was believed to play the role of etchant of m-SWCNTs. Diameter confinement from the quartz substrate was recognized as an essential factor that ensured the selective etching. 50 More etchants such as oxygen (Figure 2f), 51 water, 52 and isopropanol, 53 as well as plasma 54 and UV light, 55 have also been adopted. However, the growth window for a decent selectivity is normally very narrow; therefore, the CVD conditions need to be strictly controlled. Nonetheless, CeO 2 -supported catalysts have been shown to be very robust in the selective growth of s-SWCNTs ( Figure 2e). 56 Due to its oxygen storage capacity, CeO 2 can steadily maintain an oxidative environment and inhibit the growth of m-SWCNTs, guaranteeing reproducible selectivity. An intrinsic challenge of this etching-indispensable strategy is the trade-off between selectivity and yield. High selectivity can only be achieved when the etching effect is strong with low growth efficiency. When the content of s-SWCNTs was increased from 67% (nonselective) to 98%, the yield was reduced by a factor of ∼1000 as reported. 57 The yield could be increased through multicycle growth, 58 but was far from satisfactory. Jiang et al. developed a unique strategy to twist m-SWCNTs into s-SWCNTs by electro-renucleation during growth, achieving a selectivity as high as 99.9%. 47 The differences of formation energy between s-and m-SWCNTs during growth were significantly amplified by the reversal pulse of the electric field, thereby inducing renucleation of m-SWCNTs to s-SWCNTs ( Figure 2g). This strategy is suitable for the growth of horizontally aligned arrays of s-SWCNTs due to their identical growth direction parallel to the electric field. Selective Growth of s-SWCNTs via Chirality Control. The selective growth of nanotubes with chiralities of (n − m) MOD 3 ≠ 0 also gives s-SWCNTs. In 2003, the growth of SWCNTs enriched with (6,5) and (7,5) by Resasco et al. became the first chapter of chirality-controlled growth. 59 The selectivity was up to 55% toward (6,5) (Figure 3a). 
60 The key to selectivity lies in the design of the bimetallic CoMo catalysts, in which the Mo species disperses and stabilizes metallic Co to form small and uniform nanoparticles. A similar strategy was extended to a variety of catalysts, which exhibited selectivity toward (6,5), (7,5), or (7,6). 61,62 Notably, Chen et al. used a sulfur-promoted Co/SiO 2 catalyst to selectively grow (9,8) nanotubes with an abundance of 33.5% (Figure 3b). 63 The diameter of (9,8) tube is 1.17 nm, which is larger than those of the aforementioned SWCNTs (0.75−0.83 nm) and more in line with the requirement of FET devices. They proposed that the involvement of Co 9 S 8 intermediates benefits the formation of uniform Co nanoparticles for selective growth. In addition to the catalysts, the chirality-dependent difference in growth kinetics may also take a role in chirality selectivity. Yakobson et al. have theoretically interpreted the kinetic favorability of near-armchair ((n,n−1) or (n,n−2)) and (2m,m) chiralities. 64 This means when the size distribution of the catalyst is restricted to a narrow range, it is possible to achieve enrichment of a specific (n,n−1), (n,n−2), or (2m,m) chirality under suitable CVD conditions. Because tubes of (n,n−1) or (n,n−2) chiralities are always semiconducting, their advantage in kinetics brings about great convenience in the selective growth of s-SWCNTs, which was validated by the selective growth of the aforementioned (6,5), (7,5) (7,6), and (9,8) tubes. 61,62 The enrichment of (2m,m) nanotubes was reported experimentally, 65,66 particularly semiconducting (8,4) tubes. 67, 68 Zhang et al. explained that the enrichment of specific (2m,m) nanotubes (up to 80% for (8,4)) came from the coeffect of symmetrical matching of the catalyst surface with tube ends and their advantageous growth kinetics ( Figure 3c). 65 Inspired by enzyme-catalyzed reactions, Li et al. designed intermetallic Co 7 W 6 catalysts with high melting point and unique crystal structure of lower symmetry than normal metallic catalysts. 69,70 Using such catalysts as epitaxial templates combined with optimization of kinetic growth conditions, semiconducting (14,4) tubes were selectively synthesized (Figure 3d−i). 46 The content of s-SWCNTs was 98.9%, among which 97.4% are (14,4) tubes. The purity was further improved to 99.8% for s-SWCNTs and 98.6% for (14,4) tubes by post-treatment of water vapor. The kinetically unfavorable (16,0) tubes were also synthesized at an abundance of nearly 80%. 71 This strategy has also been demonstrated with various catalyst precursors 72,73 and expanded to other intermetallic compounds. 74 The strategy of combining thermodynamic preponderance (using catalysts with unique atomic arrangements as structure templates) and kinetic control (manipulating growth conditions) has been shown to be powerful in synthesizing chirality-specific SWCNTs, 41 holding great potential in preparing s-SWCNTs with high purity. In the studies of chirality-specified growth of SWCNTs, the identification and quantification of tube chiralities and contents are also important and challenging. Li et al. developed some feasible methods relying on both spectroscopic and microscopic techniques. 41,69,70 Raman, Rayleigh scattering, polarized optical absorption, and selected area electron diffraction working together can give precise assignments to the chiralities of the tubes. 
Raman statistics and Raman combined with microscopic techniques, including AFM and SEM, can give reliable quantification of the contents of each chiralities. 2.2. Controlled Growth of Horizontally Aligned SWCNT Arrays. The alignment of nanotubes is generally achieved by introducing some external guiding force. Gas-flowguided growth and substrate-lattice-guided growth are the two main strategies. Gas-flow-guided alignment is based on the so-called "kite mechanism". A catalyst nanoparticle (together with a nanotube) floats in a gas flow above the substrate due to thermal buoyancy, and the orientation of the nanotube is thus guided by the direction of gas flow (Figure 4a,b). 75,76 With this method, the aligned SWCNTs reached a record length of 18.5 cm. 77 The orientation and shape of SWCNT arrays can be controlled by manipulating the flow field (Figure 4c,d). 78 The challenge here is increasing the density, because floating nanotubes easily form bundles when the density is high. In addition, few-walled CNTs are sometimes grown, which is undesirable for device applications. Substrate-lattice-guided alignment is based on the strong interaction between single-crystal substrates and SWCNTs. In 2005, the aligning effects of sapphire 79 and quartz 80 were discovered (Figure 4e,f). SWCNT arrays of excellent alignment were prepared with more than 99.9% of nanotubes lying within 0.01°. 81 It is generally accepted that few-walled CNTs will not grow with this method. Moreover, using the "Trojan" catalyst, Zhang et al. obtained SWCNT arrays of ultrahigh density (∼160 /μm) on sapphire substrates (Figure 4g). In the growth of s-SWCNT arrays, there is always a trade-off between purity and density. For high-density SWCNT arrays (≥100 tubes/μm), the highest semiconducting purity reported is 91%, 82 while for SWCNT arrays of high semiconducting purity (99.9%), the highest density is ∼11 tubes/μm. 47 Some post-etching methods have been developed to further increase the semiconducting purity of SWCNT arrays. Selective electrical breakdown by Joule heating is one of the most widely used methods, breaking down the m-SWCNTs while preserving s-SWCNTs by turning off via gate voltage. 83 In fact, the s-SWCNT arrays used in the CNT computer reported in 2013 was prepared by this method. 31 In order to enhance the removal of m-SWCNTs, Rogers et al. introduced a thermocapillary resist film above the SWCNT arrays ( Figure 4i). 84 All m-SWCNTs were exposed by Joule heating and then easily etched away by reactive ions. However, due to the nature of the thermocapillary flow, the spatial resolution of this method is limited to ∼100 nm. Maruyama et al. improved the spatial resolution to ∼55 nm by utilizing the exothermic oxidation of the organic films. 85 Nonetheless, it is still challenging to apply for ultradense SWCNT arrays. Moreover, the postetching inevitably leads to an increase in nonuniformity of the local SWCNT density, thereby increasing the performance variability of FETs and weakening its usability in large-scale ICs. 6 2.3. Summary of Controlled Growth of s-SWCNTs. The selective growth of s-SWCNTs by using etching agents to preferentially suppress the growth of m-SWCNTs has been widely demonstrated, but it is challenging to reach a high selectivity. However, the strategy based on electro-renucleation showed great potential in growing s-SWCNT arrays of high purity (99.9%). 
47 For chirality-controlled growth through the synergy of using the unique intermetallic Co 7 W 6 catalyst and kinetic control, s-SWCNTs of high purity (98.9%) with 97.4% (14,4) species were synthesized. 46 This method offers a better uniformity of band gap. We expect that combining the electro-renucleation with catalyst design may result in much improved selectivity, which is well worth further exploration. A postgrowth treatment can further increase the purity of s-SWCNTs, though the conditions need to be finely tuned to balance purity and yield. Thus, an ultrahigh selectivity toward s-SWCNTs can be expected. However, taking the requirement of density into account, enormous efforts are still needed to establish feasible approaches to prepare s-SWCNTs for high-performance electronics. SORTING OF s-SWCNTS Although the discovery of SWCNTs occurred in 1993, 88 the separation of SWCNTs was not reported until this century. In recent years, the development of SWCNT sorting made the application of semiconducting or even single-chirality SWCNTs promising. Four of the dominating methods used to separate SWCNTs are density gradient ultracentrifugation (DGU), chromatography, selective extraction by conjugated polymers (SECP), and aqueous two-phase extraction (ATPE). (Figure 5b), polythiophenes, etc. Poly-[(9,9-dioctylfluorenyl-2,7-diyl)-alt-co-(6,6′-(2,2′-bipyridine))] (PFO-Bpy) allows the extraction of single-chirality (6,5) tubes. 95 In 2020, s-SWCNTs with >99.9999% semiconducting purity was achieved by repetitive sonication and filtration (Figure 5f). 32 Compared to the sorting processes in aqueous SWCNT dispersions, the extraction pathway offers higher purity, featured by better-resolved peaks and remarkably reduced baseline in the absorption spectrum (Figure 5d), although the yield is much lower. Due to the negative effect of the residual polymer on the device performance as well as the difficulty in polymer synthesis, removable/recyclable polymers, such as degradable polymers 99−102 and supramolecular polymers enabled by hydrogen bonding 103,104 or coordination 105 were also explored (Figure 5e). Chromatographic Separation. In 2003, ion exchange chromatography was adopted by Zheng et al. 106,107 to separate DNA-dispersed SWCNTs (DNA-SWCNTs). The specific interaction between DNA and SWCNTs resulted in the differential adsorption and retention of SWCNTs with different structures when they were eluted by a salt gradient. Single-chirality separation can be achieved by using specific recognition DNA sequences identified from the vast ssDNA library via a systematic search (Figure 6a). 108 Gel-based SWCNT separation was developed in 2009. 109,110 Kataura et al. remarkably enhanced the separation performance by using chromatography with columns of agarose gel or allyl dextran-based gel (Sephacryl). 111,112 The preferable adsorption of SDS on m-SWCNTs led to the earlier elution of m-SWCNTs with weaker interaction with the gel column and the separation of m-and s-SWCNTs (Figure 6b). 112 Similarly, the higher affinity of DOC (sodium deoxycholate) molecules to smaller-diameter s-SWCNTs was employed for diameter separation (Figure 6b). Overloading, 111 temperature, 113 pH, 114,115 and the addition of salts 114 or ethanol 116 could amplify the differential interaction between different (n,m) species, thus enabling high-yield and high-resolution chiral sorting (Figure 6c). 
Chromatographic sorting can also be performed automatically on commercially available chromatography equipment, which is a big advantage of this method. 118 3.3. Density Gradient Ultracentrifugation. DGU was introduced into SWCNT sorting by Hersam et al. in 2005. 119 Separation is achieved by the equilibrium sedimentation formed when the density of the dispersoids is the same as the density of the surrounding medium. In this system, dispersant−SWCNT hybrids are the dispersoids, whose density is determined not only by the intrinsic density of the SWCNTs but also by the surface coatings, 120 counter-ions, 121,122 hydration layers, 123 and species encapsulated inside the tubes. 124,125 Surfactants play important roles: for example, using sodium cholate (SC) alone as the dispersant led to diameter sorting, while using SC and SDS together led to m- and s-SWCNT (M/S) sorting (Figure 6d). 120 By using nonlinear DGU with a density gradient profile that varies gently with depth, Weisman et al. 126 significantly improved the resolution. 3.4. Aqueous Two-Phase Extraction. By tuning conditions such as the oxidative environment 130 and pH 131 separately or in combination, the competitive adsorption of surfactants on SWCNTs was modulated to improve the sorting resolution; M/S-based and band-gap-based sortings were thereby realized (Figure 7a,b). 130 The endohedral filling of SWCNTs further improved the sorting resolution, allowing the separation of large-diameter (13,7), (14,6), (15,5), and (16,3) tubes (Figure 7c). 132 The sequence-dependent interaction between DNA and SWCNTs enabled high-efficiency chirality sorting of SWCNTs. By carefully selecting DNA sequences, 23 single-chirality SWCNTs were isolated. 133 Machine-learning-guided screening of DNA sequences 134,135 greatly improved the efficiency and success rate (Figure 7d). The average molecular weights of the phase-forming polymers also have a significant influence on the distribution of DNA-SWCNTs. 136 The sorting resolution can be improved by selecting suitable polymer combinations with the right molecular weights (Figure 7e,f). The sorting mechanism was interpreted in terms of a solvation energy spectrum: 136,137 different DNA-(n,m) species in a given DNA-SWCNT dispersion present different solvation energies, enabling their differential distribution between the two phases (Figure 7g). 3.5. Advantages and Development Opportunities of Various Sorting Methods. As summarized in Table 1, each of the separation approaches exhibits unique advantages and also its own challenges and opportunities toward the goal of separating high-purity single-chirality s-SWCNTs at high concentration. SECP enabled the separation of s-SWCNTs with the highest semiconducting purity among the four methods. In addition, the high efficiency and simple processing steps of SECP dramatically reduced the threshold for the application of SWCNTs in electronics. Up to now, most device studies have used SECP-separated s-SWCNTs. However, the yield of separation and the chiral selectivity still need to be improved, 95,138 especially in the large-diameter regime. For sorting in aqueous solution, although the semiconducting purity of sorted SWCNTs is not as good as that of SECP, the efficiency in chirality-based sorting is very impressive. The advantage of chromatographic separation is its ease of automation, but the concentration of SWCNTs directly obtained after separation is low. High-concentration SWCNTs can be directly obtained by DGU and ATPE.
However, DGU relies on high-speed and long-term centrifugation; the throughput of a single-round separation is limited by the scale of the centrifugation. For ATPE, DNA sequences with specific recognition ability allow high-efficiency and high-concentration sorting of SWCNTs. Nevertheless, further improving the separation resolution for surfactant-dispersed SWCNTs is urgent for expanding ATPE to the large-diameter regime, making it better suited to separating SWCNTs for device applications. ASSEMBLY OF WELL-ALIGNED SWCNT ARRAYS FROM DISPERSIONS. Despite a short history, research on the alignment and assembly of SWCNTs has made rapid progress recently, driven by the urgent need for array materials in electronics. Owing to the purity−density dilemma faced by the direct growth of aligned SWCNT arrays and to the breakthroughs in sorting that made it possible to obtain dispersions with extremely high semiconducting purity (up to 99.9999%), the assembly of monolayer arrays via solution processes has become the primary approach. As shown in Figure 8, various assembly methods have been developed according to different alignment mechanisms. Shear alignment, 139 matrix shrinking, 140 and dielectrophoretic assembly (DEP) 141 rely on anisotropic flow, stress, and electric fields, respectively. Langmuir−Blodgett (LB), 142 Langmuir−Schaefer (LS), 143 evaporation-induced self-assembly (EISA), 23 floating evaporative self-assembly (FESA), 144 tangential flow interfacial self-assembly (TaFI-SA), 145 dimension-limited self-alignment (DLSA), 32 and binary liquid interface-confined self-assembly (BLIS) 34 utilize interfaces and contact lines to facilitate co-orientation. Spatially hindered integration based on a DNA template (SHIDT) 48 and shear on patterns 146 design the interactions between SWCNTs and patterned substrates. The arrays prepared by these methods present densities of 25−500 μm−1 and two-dimensional order parameters 147,148 S2D > 0.75. Currently, few methods practically reach the density target for device applications (100−200 μm−1, marked in green in Figure 8). The DLSA/BLIS methods 32,34 developed by Peng et al. achieved tube densities beyond 120 μm−1. It was proposed that the assembly proceeds through three stages: SWCNT confinement at the liquid−liquid interface, pre-assembly, and deposition along the contact line while the wafer-scale substrates are slowly pulled out of the dispersions (Figure 9a,b). Hydrogen bonding might play an important role in confining and pre-assembling the SWCNTs. Top-gated FETs fabricated on these high-density SWCNT arrays showed better performance than commercial silicon FETs with similar gate lengths (Figure 9c). The SHIDT method 48 developed by Sun, Yin, et al. used DNA origami to form nanotrenches, in which the energy advantage generated by geometric confinement and DNA hybridization promoted the selective deposition of SWCNTs (Figure 9d,e). The highest density was 96 μm−1, with a uniform pitch and near-perfect local alignment of the nanotubes. However, the scalability of DNA origami is still challenging and the cost is very high. Cao et al. prepared bilayer arrays of SWCNTs with an ultrahigh density of 500 μm−1 per layer based on the LS method. 143 Yet the FET performance was not up to expectations because of insufficient electrode contacts and severe intertube screening. Anisotropic reorientation and controlled pre-aggregation are the two key procedures for reaching good alignment and high density, respectively (Figure 10).
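Since the two-dimensional order parameter S2D quoted above is the standard figure of merit for alignment, a short sketch of how it can be computed from measured tube-axis angles may be useful (this uses the usual 2D nematic definition; the cited works 147,148 should be consulted for the exact convention):

```python
import numpy as np

def order_parameter_2d(angles_deg):
    """S_2D = <cos 2(theta - theta0)>: 1 for perfect alignment, 0 for random
    in-plane orientations; theta0 is the mean alignment axis, found via the
    2*theta representation appropriate for axis-like (nematic) objects."""
    theta = np.radians(np.asarray(angles_deg))
    theta0 = 0.5 * np.arctan2(np.sin(2 * theta).mean(), np.cos(2 * theta).mean())
    return np.cos(2 * (theta - theta0)).mean()

# Example: tube-axis angles (degrees) as measured from SEM/AFM images.
rng = np.random.default_rng(0)
angles = rng.normal(0.0, 8.0, 1000)                # ~8 deg angular spread
print(f"S_2D = {order_parameter_2d(angles):.2f}")  # ~0.96 for this spread
```

A Gaussian angular spread of roughly 8−9 deg already corresponds to S2D ≈ 0.95, which gives a feel for how tight the alignment must be to meet the targets discussed below.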
In previous works, researchers focused more on the former, i.e., anisotropic reorientation. They reoriented nanotubes through physical, chemical, or topological strategies, but the densities were generally 25−50 μm−1, possibly due to the low tube concentration of the dispersions. Continuing to increase the concentration may lead to undesired tube bundling. In contrast, the LS method 143 physically compressed the water surface, and the DLSA/BLIS methods 32,34 chemically formed a potential well with hydrogen bonds. Both promoted aggregation so as to raise the effective concentration of SWCNTs at the gas−liquid or liquid−liquid interface, increasing the array density to 120 μm−1 and above without forming large-scale bundles. We can conclude that, to increase the density of arrays, much attention should be paid to enabling the controlled pre-aggregation of SWCNTs during assembly. However, the aggregation efficiency is still unsatisfactory, which greatly prolongs the assembly time. In addition, to further enhance the feasibility of solution assembly methods, the following challenges need to be addressed. First, the impact of interfacial fluctuations on the alignment and uniformity of SWCNTs should be controlled. The widely used FESA method 144,149 developed by Arnold, Gopalan, et al. utilizes tangential flows to reorient nanotubes along the oil−water−solid contact line. The pinning effect leads to intermittent jumps rather than a continuous movement of the contact line across the pulled substrate, which creates a sequential deposition of well-aligned strips and perturbed interfacial regions with random networks of nanotubes. In 2019, Rutherglen et al. significantly improved the order of nanotubes in the perturbed interfacial regions by reducing surface waves on the water subphase through isolating air flow, reducing vibration, and operating in a cleanroom. 150 Similarly, the LS method produced arrays with better alignment than the LB method because the horizontal transfer was less disturbing to the SWCNT Langmuir film on the water. On the other hand, controllably applying interfacial fluctuations to form anisotropic potential fields may also benefit the assembly of nanotubes, as revealed by early research using surface acoustic waves. 151 Second, the effect of dispersants and solvents on the assembly process should be elucidated. The composition of SWCNT dispersions is complex and diverse. Both dispersants and solvents affect the interactions between nanotubes and substrates, especially surfactants, which significantly change the properties of surfaces and interfaces. Therefore, many assembly methods developed in specific dispersing systems have poor versatility. For example, different aqueous dispersions showed different pH ranges for deposition on poly-L-lysine-modified silicon substrates. 152,153 The adsorption of PFO-BPy-wrapped SWCNTs on several modified silicon substrates was less favorable in toluene than in chloroform. 154 The PCz-wrapped nanotubes in 1,1,2-trichloroethane dispersions and the poly[2-methyl-7-(6′-methyl-[2,2′-bipyridin]-6-yl)-9-(2-octylonoyl)-9H-carbazole] (PCO-BPy)-wrapped nanotubes in m-chlorotoluene dispersions were hardly deposited on silicon wafers via random adsorption, thus avoiding damage to the DLSA or BLIS process. 32,34 The mechanisms responsible for these differences have not been fully investigated. Third, the development of assembly methods based on aqueous dispersions should be promoted.
There have been few studies on array assembly from aqueous dispersions, and no large-scale uniform arrays other than discrete domains have been reported, 23,141,155 partially because of the disadvantages of aqueous dispersions such as complex composition, short nanotube length, small nanotube diameter, and low semiconducting purity. However, owing to the compatibility of aqueous dispersions with the single-chirality sorting process, the assembled arrays still possess interesting properties, such as polarized light emission, 23 which is worthy of further investigation. SUMMARY AND OUTLOOK. From the above demonstration and discussion, it can be concluded that the practical application of SWCNTs in high-performance electronics must be based on the full development of structure-controlled growth, selective sorting, and solution assembly, in which great efforts over the past 25 years have led to significant progress. For the selective growth of s-SWCNTs, the strategy based on electro-renucleation of m-SWCNTs into s-SWCNTs showed great potential for growing aligned s-SWCNTs of high purity (99.9%). 47 Chirality-controlled growth through the synergy of thermodynamic control, using the unique intermetallic Co7W6 catalyst as an epitaxial template, and kinetic control achieved both high semiconducting and high chirality selectivity, resulting in materials with a better uniformity of band gap. 46 In addition to high selectivity, achieving alignment will be an additional advantage for applications. Scalable production remains another challenge. For sorting, s-SWCNTs with a semiconducting purity of 99.9999% and more than 30 types of single-chirality s-SWCNTs have been separated. However, the separation of large-diameter (>1.2 nm) single-chirality s-SWCNTs is still a challenge. At the current stage, SWCNT sorting in the aqueous phase faces the common problems of short tube length and insufficient semiconducting purity. The development of less destructive dispersing procedures and the improvement of the M/S sorting resolution are crucial. As to SECP, more effort should be put into polymer structure engineering for higher yield and better selectivity toward larger tubes. For array assembly, high densities of 100−200 μm−1 with nearly perfect alignment (S2D ≥ 0.95) of nanotubes have been achieved with specific methods. 32,34,48 However, low density is still the limiting factor keeping most methods from practical applications; it is expected to be improved by efforts to promote the controlled pre-aggregation of SWCNTs. Other challenges lie in optimizing surface and interfacial fluctuations to improve the uniformity of arrays, clarifying the effects of solvents and dispersants in assembly to enhance applicability to different dispersions, and developing aqueous assembly methods to expand the application of SWCNT aligned arrays with high chiral purity. At the current stage, a critical challenge for the controlled preparation of SWCNTs is the integration of growth, sorting, and assembly. These three processes should be revisited, studied as a whole chain, and optimized synergistically. The status and chirality distribution of the grown SWCNTs will affect their dispersion and the efficiency of sorting, as well as the utilization ratio of the SWCNTs. The choice of assembly method must be based on the solvents and dispersing agents of the sorted SWCNTs. The length and surface potential of the sorted SWCNTs and the fluidic properties of the solution will affect the results of assembly.
In addition, for the sake of reducing the variability of SWCNT FETs, which is a vital constraint on integration, more attention should be paid to the reproducibility of growth, sorting, and assembly, as well as to the uniformity of the prepared arrays. SWCNT preparation should be coupled closely with device design and fabrication, forming an entire iterative cycle. Only in this way can the long-term progress of SWCNT-based ICs be promoted. In addition, the lack of feasible characterization methods to quantify the high semiconducting purity of SWCNTs has become a crucial constraint. For now, the only way to quantify a semiconducting purity higher than 99.9% is to fabricate FET devices from the SWCNTs and analyze their transport characteristics, which is not only complicated but also destructive. Moreover, an accurate analysis requires a valid determination of the length distribution and density, as well as the alignment, of the SWCNTs. Establishing reliable nondestructive quantification methods of good accuracy, high efficiency, and nanoscale resolution for wafer-scale samples is a real necessity for the further development of this field. We believe that the incorporation of scanning probe microscopy may shed light on a solution to this issue. In the past 25 years, SWCNT-based electronics has thrived, growing from a single FET into a microprocessor of large-scale integration. 33 Superior performance has been demonstrated from single FETs at extreme scaling 10 up to the level of ICs. 32 SWCNTs have shown great potential in both high-performance microprocessors and thin-film devices. 1 The practical availability of high-quality SWCNT materials is the prerequisite for these electronic applications, and we believe the future of CNT-based electronics lies in the development of SWCNT preparation. Author Contributions: The manuscript was written through contributions of all authors. All authors have given approval to the final version of the manuscript. Y.C. and M.L. contributed equally.
Fluorescence lifetime imaging for the two-photon microscope: time-domain and frequency-domain methods. Fluorescence lifetime images are obtained with the laser scanning microscope using two methods: the time-correlated single-photon counting method and the frequency-domain method. In the same microscope system, we implement both methods. We perform a comparison of the performance of the two approaches in terms of signal-to-noise ratio (SNR) and the speed of data acquisition. While in our practical implementation the time-correlated single-photon counting technique provides a better SNR for low-intensity images, the frequency-domain method is faster and provides less distortion for bright samples. © 2003 Society of Photo-Optical Instrumentation Engineers. [DOI: 10.1117/1.1586704] Introduction. In complement to the emission spectrum, the determination of the lifetime of the excited state is a commonly used technique to characterize the emitting molecular species. In the context of fluorescence microscopy, fluorescence lifetime imaging (FLI) can provide a new contrast mechanism to help identify the local environment of the fluorophore. In addition, FLI can enable the quantitation of the relative concentrations of a number of species that are colocalized. FLI has been developed in several laboratories. Among the most common applications are the determination of ion or other small-ligand concentrations using lifetime-sensitive dyes, the determination of oxygen concentration in cells, and the quantitation of Förster resonance energy transfer (FRET) for distance measurements in the nanometer range. Two alternative methods are primarily used for the measurement of the fluorescence lifetime. One method is referred to as the time-domain method and is based on constructing the histogram of photon delays using the time-correlated single-photon counting (TCSPC) method. The other method is generally referred to as the frequency-domain method and consists of measuring the harmonic response of a fluorescent system using either sinusoidally modulated excitation light or a fast-repetition pulse-train laser. The TCSPC method is intrinsically a digital method wherein the detector measures one photon at a time. However, the time delay is measured using an analog detection method (time-to-amplitude converter) followed by fast conversion to a digital form. The frequency-domain method is intrinsically an analog method (although the waveform is digitally recorded) and the detector delivers a current proportional to the light intensity. These two different methods have been extensively described in the literature in connection with measurements of fluorescence lifetime in a cuvette and we will not review the principles in this paper. For the interested reader, we suggest a number of review articles or books included in the references (see for example the series of Topics in Fluorescence Spectroscopy [25][26][27]). More recently, both methods have also been used for the determination of fluorescence lifetimes in the microscope environment. 28−32 In this paper, we discuss our implementation of lifetime measurements in the laser-scanning microscope using both frequency- and time-domain methods and the comparison between them. Lifetime measurements in the microscope using time-resolved cameras have also been previously discussed (see, for example, Refs. 12 and 14).
Since the camera per se cannot provide the time resolution required for fluorescence lifetime measurements, these devices are generally used in conjunction with an image intensifier that acts as a fast shutter. This shutter can be thought of as a modulation of the gain of the detection system and in our classification falls under the general category of the analog detection mode. The same principle, namely, the modulation of the gain of the detector, can be used in a single-channel detector. Note here that the kind of gain modulation that is applied to the detector does not change the analog nature of the detection system. Therefore, rather arbitrarily, we will also include in the category of frequency-domain detection those methods that employ a narrow opening of the detector gain to sample the decay curve some time after the excitation. In practice, this scheme of modulation increases the harmonic content of the detected signal at the expense of the duty cycle. The advantages of this approach were previously described, for both cuvette experiments and the microscopy environment. 19,33 In this paper, we discuss only sinusoidal modulation of the detector gain, although the use of other functions could offer significant advantages when the sample contains multiple species with different lifetimes. The lifetime approach in imaging is generally different from that in the cuvette. In a cuvette measurement, we are interested in the accurate measurement of the fluorescence decay with the purpose of determining the number of different component species contributing to the decay or the specific mechanisms involved in the deactivation of the excited state. In imaging, we are interested in resolving one or two components and in using the lifetime parameters as a means to contrast the image or to determine the locations in the image in which a specific excited-state reaction occurs. Therefore, the instrumentation for lifetime measurements in a cuvette differs somewhat from that used for imaging. The problem of determining the fluorescence lifetime during the small time the laser scans a pixel in the image is similar to the problem of determining the lifetime during a stopped-flow measurement or in the flow-cytometer environment. 24 However, in both the stopped-flow and the flow-cytometer cases, a relatively large fluorescence signal is measured. For example, in the flow cytometer, the fluorescence is collected from an entire cell, while in FLI the fluorescence is collected from each pixel of the image. A pixel is generally much smaller than a cell and, more importantly, contains fewer fluorophores. Another area of recent development is related to the measurement of the fluorescence lifetime of single molecules, either immobilized or freely diffusing in solution. The challenge of the lifetime determination during very short acquisition times can be expressed in terms of the total intensity collected during a small amount of time. The purpose of this study is to determine the various regimes of operation and to perform experimental observations to determine how the best SNR can be obtained for the two different lifetime approaches (digital versus analog). For this purpose we assembled a laser-scanning microscope system 34,35 based on two-photon excitation 36−39 in which we can make lifetime measurements using both techniques with acquisition times as short as 50 μs.
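Throughout the paper, phase and modulation values at a modulation (or harmonic) frequency are converted into lifetimes; for reference, the standard single-exponential relations, written here with ω = 2πf the angular modulation frequency, are:

```latex
\tan\varphi = \omega\tau, \qquad
m = \frac{1}{\sqrt{1+\omega^{2}\tau^{2}}},
\qquad\text{so that}\qquad
\tau_{\varphi} = \frac{\tan\varphi}{\omega}, \qquad
\tau_{m} = \frac{1}{\omega}\sqrt{\frac{1}{m^{2}}-1}.
```

For a single-exponential decay the two estimates agree; for a mixture of lifetimes, the phase lifetime is smaller than the modulation lifetime, a fact used later when interpreting the cell images.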
We describe a new analysis of the frequency- and time-domain methods in the same microscope so that a meaningful comparison between the two methods can be achieved. We first present simulation experiments to demonstrate the analysis technique and to illustrate the effect of photon-counting statistics on the precision of the lifetime measurement. We then compare the two approaches using cuvette-type experiments with a homogeneous solution sample and a stationary excitation beam. Finally, lifetime images are presented. Although the light source, sample, and microscope are the same in our system, the detectors used for the time domain and the frequency domain are different. In the frequency domain, we used the Hamamatsu R928 photomultiplier with rf modulation at the second dynode, while in the time-domain experiments we used the Hamamatsu R7400, wired for photon-counting operation. To perform a comparison between the two methods for the cuvette experiments, we transformed the time-domain decay to the frequency domain by calculating the fast Fourier transform (FFT) of the time-decay data so that the measurements could be directly compared in terms of phase and modulation accuracy. However, data analysis to recover the decay parameters for the time-domain data was also performed directly in the time domain to preserve the proper information about the data statistics. For the imaging experiments, we again transformed the time-domain decay to the frequency domain for each pixel. This operation enables the use of analytical expressions for the lifetimes of a multicomponent system. This is an important difference in our implementation, since the literature for the time-domain approach in FLI describes the use of look-up tables and other semiempirical techniques to recover the lifetimes of multiple components. 8 A conclusion of our study is the rather obvious observation that the quality of the recovered data depends only on the SNR, which is essentially determined by the number of photons collected in both the time and the frequency domains. When the light intensity is relatively large, over 10^6 photons/s, the digital method of data acquisition intrinsically limits the rate of photon acquisition in the time domain. Since in the frequency domain the detection system operates in the analog mode, this limitation does not occur. This is an important consideration for FLI, as explained later in this paper. For very low signal intensity, the discrimination capability of the single-photon-counting method provides a better SNR. However, other factors play an important role in improving the SNR at low light-intensity levels in the frequency domain. Cuvette Setup. All experiments were performed using a two-photon excitation microscope with a 1.3 numerical aperture (NA) oil objective. For the cuvette experiments, we used an eight-well slide holder with two wells filled with the sample and the reference solution, respectively. In the sample well, we made a dilution study of fluorescein in PBS buffer at pH 8 over the range from 1 nM to 100 μM. A fixed excitation power was used such that a large range of emission intensities was measured. In the reference well, we used a solution of dimethyl-POPOP (lifetime 1.45 ns). The excitation source was a Tsunami mode-locked titanium:sapphire laser (Spectra Physics, Sunnyvale, California) with a repetition frequency of 80 MHz and a pulse width of about 100 fs.
The optical path for the two-photon system has a dichroic mirror to separate the excitation light (generally in the 800-nm region) from the emission (in the interval 450 to 700 nm). In front of the detector, we used a filter (BG39, Schott glass) to block the scattered light [and/or second-harmonic generation (SHG)] at the near-IR excitation wavelength. For the cuvette experiments, the beam was held stationary at the center of the cuvette. Microscope Setup. The experimental setup is essentially the same as that used for the cuvette experiments, but the sample consists of a cell that expresses the enhanced green fluorescent protein (EGFP). In this case, the laser beam was raster-scanned across the sample to obtain a 256×256-pixel image with a residence time of about 200 μs/pixel. In some cases, several images were averaged to obtain an effectively larger count per pixel. Simulation Experiments. The purpose of this section is to determine how the total number of counts affects the recovered lifetime in the regime of relatively few counts in the decay curve, and to test the methods of recovering the lifetime value under this condition using the FFT of the time-domain data. All simulations were performed in the time domain, but the data were processed and binned as if they had been acquired in a frequency-domain instrument. The FFT method has been extensively used in the deconvolution of the lamp response for time-domain analysis. 40 It is no longer used, owing to the speed of computation of modern computers and because, for the FFT operation, it is difficult to correctly propagate the statistics. In the FLI context, we propose this approach because it is fast and provides a relatively simple way to recover pixel lifetime values for up to two to three components. 41,42 Simulations were performed to mimic the emission of a 10 nM solution of fluorescein, which, under normal circumstances in our instrument, shows a single-exponential decay of about 4 ns. The count rate for this sample (which ultimately depends on the laser power and the collection optics) was simulated to be about 4 kHz, which is adequate for obtaining good statistics in both frequency- and time-domain techniques in cuvette-type experiments. First, we show the principle of the method in a single-channel simulated experiment. A typical simulated decay of fluorescein (4 ns, blue dots) and of a theoretical standard compound (1.0 ns, green dots) is shown in Fig. 1. A fit obtained using standard time-domain analysis 8 is shown in Fig. 1 (solid lines). For this simulation, the sample curve contains about 4000 counts and the reference curve about 2000 counts. The recovered results are 4.2±0.1 and 0.94±0.05 ns for a single-exponential-component resolution of the decay. The simulated raw data set was fast Fourier transformed and is presented in a typical frequency-domain format in Fig. 2. The FT (real and imaginary parts converted to phase and modulation values) of the sample and reference decays is shown in Fig. 2 on a log frequency axis for the 4-ns (red symbols) and the 1-ns (blue symbols) decays, respectively. The modulation curve for both decays is relatively smooth and close to the expected value (solid lines) up to about 1000 MHz. Instead, the phase curve shows large deviations from the expected monotonically increasing curve starting at about 80 MHz.
However, if we calculate the relative phase between the two signals, corrected for the finite lifetime of the reference, and the modulation ratio (also corrected for the lifetime of the reference), we obtain the points (green points) of Fig. 2. The relative phase and the modulation ratio follow the expected trend up to about 1000 MHz. This simulation shows that the deconvolution of the lamp response (approximately 300 ps for the Hamamatsu R7400 detector used in this study) is necessary to correctly recover the decay, even for the relatively narrow lamp pulse used in this simulation. Using the Globals WE software program (Laboratory for Fluorescence Dynamics, University of Illinois), we analyzed the phase and modulation curves obtained from the time-domain data after referencing and obtained the fit shown in Fig. 2. The recovered lifetime is 4.3±0.1 ns, in good agreement with the time-domain analysis of the same original data set. However, the residues are larger than 2 to 3 deg in the low-frequency part of the curve and they increase in the high-frequency part. It is clear that the residues are quite large by the standards of frequency-domain data and that the overall fit is good only up to about 100 MHz. This simulation shows that even for a decay curve containing on the order of 4000 counts, the frequency range available is limited to about 100 MHz. In summary, this analysis method shows the useful bandwidth of a time-domain measurement given a particular number of photons collected. We repeated the simulation with a factor of 10 fewer counts for the 4-ns sample (400 counts total). The time-domain data are shown in Fig. 1 and the time-domain data set after the FFT is shown in Fig. 3. For this count level, the deviation from the expected result (solid curve) is severe everywhere and particularly above 100 MHz. This result is in agreement with our estimate that no more than one or two harmonic frequencies can be used for the determination of the lifetime in a pixel. Next, we show that it is possible to calculate the lifetime value from the time-domain data using a very rapid procedure based on exact frequency-domain formulas. Although fits of the decay in the time and the frequency domains correctly recover the lifetime values, the least-squares procedure with lamp deconvolution used to recover the lifetime value is prohibitive in terms of computer time for the analysis of an image that contains on the order of 10^5 pixels. Furthermore, when the counts are very low, such fits become unreliable. In Table 1, we report the values of the phase and modulation and the lifetimes calculated from the phase and from the modulation values. In the frequency domain, these values are known as phase and modulation lifetimes and they should be identical for a single-exponential decay. In Table 1, the calculation was done for successive harmonics calculated by the FFT algorithm. Up to about 78 MHz, the lifetime values recovered by the simple formulas of Eqs. (1) and (2) are relatively close to the expected values, but the difference becomes larger at higher frequencies. When there are enough counts in the decay curve, it is possible to use a formula derived originally by Spencer and Weber 44 for the determination of two lifetime components given the phase and modulation at the fundamental frequency and either the phase or the modulation at the second harmonic. We analyzed the data set containing 4000 counts using Weber's formula.
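To make the procedure of this section concrete, the following is a minimal Python sketch (our illustration, not the authors' code) of the pipeline just described: simulate Poisson-noised single-exponential TCSPC histograms for a 4-ns sample and a 1-ns reference, take the FFT, reference the sample against the standard of known lifetime, and apply the single-frequency phase and modulation estimators. The bin number, random seed, and referencing conventions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
f0, nbins = 80e6, 256                  # laser repetition rate (Hz), histogram bins
T = 1.0 / f0                           # 12.5-ns measurement window
t = np.arange(nbins) * T / nbins
w = 2 * np.pi * f0

def histogram(tau, total_counts):
    """Poisson-noised single-exponential TCSPC histogram over one laser period."""
    shape = np.exp(-t / tau)
    return rng.poisson(total_counts * shape / shape.sum())

def phase_mod(counts, k=1):
    """Phase (rad) and modulation at the k-th laser harmonic, from the FFT."""
    F = np.fft.rfft(counts)
    return -np.angle(F[k]), np.abs(F[k]) / np.abs(F[0])

tau_ref = 1.0e-9                                  # known reference lifetime
ps, ms = phase_mod(histogram(4.0e-9, 4000))       # sample: ~4000 counts, 4 ns
pr, mr = phase_mod(histogram(tau_ref, 2000))      # reference: ~2000 counts, 1 ns

# Referencing: instrumental phase shifts and demodulation cancel in the
# difference/ratio; then correct for the finite lifetime of the reference.
phi = ps - pr + np.arctan(w * tau_ref)
m = (ms / mr) / np.sqrt(1.0 + (w * tau_ref) ** 2)

tau_phi = np.tan(phi) / w                             # phase lifetime
tau_mod = np.sqrt(max(1.0 / m**2 - 1.0, 0.0)) / w     # modulation lifetime
print(f"tau_phi = {tau_phi*1e9:.2f} ns, tau_mod = {tau_mod*1e9:.2f} ns")  # both ~4 ns
```

With roughly 4000 counts, both estimators land near 4 ns at the 80-MHz fundamental; raising the harmonic index k quickly degrades the estimates, in line with the bandwidth limitation discussed above.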
Weber's formula uses two frequencies and provides two lifetime values and the fractional intensity. The first harmonic frequency and a successive harmonic frequency are used, as shown in Table 2. For this data set, the recovered values should give only one component. Indeed, the fractional intensity of the 4-ns component is close to one, and the negative lifetime obtained for the second component, of very small amplitude, is due to the noise in the data. This example shows that it is possible, by a simple inversion formula, to recover two lifetime values and the relative fraction per pixel. This solution is exact and does not require minimization or the use of look-up tables. Discussion of the Simulated Experiments. There are various issues regarding this comparison of the time-domain data analyzed in terms of familiar frequency-domain terms. Although the purpose of this study is to assess the statistical significance in the low-count regime, we can also compare the time and frequency domains in the high-count regime in our simulations. Of course, if more counts are collected, the time-domain and the frequency-domain analyses should be equivalent. We simulated the data again (not shown), but this time we increased the total counts by a factor of about 100 (400,000 counts total). As expected, the noise was reduced and the time-domain curves, when fast Fourier transformed into the frequency domain, behave regularly at high frequencies (up to about 1000 MHz). This indicates that the deviation (systematic and random) from the expected results with relatively few counts was due to statistical errors. Our conclusion is that for cuvette experiments, using integration times of several seconds, we could always obtain a reasonable total number of counts in the time domain and that the differences between the time domain and the frequency domain are marginal. The number of photons in the decay curve required to reach the precision of the direct phase determination that is normally achieved in the frequency domain (0.2 deg for the phase and 0.004 for the modulation) is over 1 million. In a laser-scanning imaging instrument, this is unlikely to be achieved due to the limits on pixel dwell time and on the speed of data acquisition of the photon-counting detector. Instead, in the low-count regime, the total number of counts collected determines the frequency range that can be usefully employed. In practice, only a very restricted frequency range can be employed, and for most applications in FLI, one or two frequencies (the fundamental laser repetition frequency and the second harmonic) are sufficient. The higher harmonics have a very large error. Fluorescence Lifetime Imaging in the Laser-Scanning System. As we stated, for FLI the most important consideration is the number of counts that we can reasonably expect in one pixel. An upper limit for integration in a pixel is determined by the time required to collect a frame. It is reasonable to integrate over a pixel for about 100 to 200 μs, which corresponds to a frame time of about 6.5 to 13 s for a 256×256 image. In several experiments done with our acquisition card (model B&H 630), we were limited to sustained data acquisition rates of about 2 MHz. Note that this average rate is also close to the maximum instantaneous rate. This is due to the dead time of the card, which is estimated to be about 150 ns, and to other limiting factors due to data transfer.
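These limits translate directly into a budget of molecules and counts per pixel; a small back-of-the-envelope check (using the card limit quoted above and the per-molecule brightness of 30,000 counts/s for fluorescein quoted in the next paragraph; the frame-averaging reading is our assumption):

```python
card_limit = 2e6          # counts/s the TCSPC card can sustain (measured, B&H 630)
per_molecule = 30e3       # counts/s per fluorescein molecule in this instrument

# Number of fluorescein molecules in a pixel that saturates the electronics:
print(card_limit / per_molecule)        # ~67, i.e., the "about 60" quoted below

# Counts accumulated at saturation for a 100-us pixel dwell:
print(card_limit * 100e-6)              # 200 counts per pass
print(card_limit * 100e-6 * 10)         # ~2000 counts after 10 averaged frames
```

The ~2000-count ceiling is consistent with the effective 1-ms pixel times (ten averaged 100-μs frames) used for the images below.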
We experimentally determined the maximum (and minimum) rate of data acquisition of our system, both for the frequency domain (analog detection) and for the time domain (photon-counting detection). Figure 4 shows the counting rate or photocurrent effectively measured by our system as a function of the fluorescein concentration in the cuvette for a fixed excitation power of 5 mW (two-photon excitation). For the 1.3-NA objective used, at about 10 nM the average number of molecules in the two-photon excitation volume is of the order of 1. It is interesting that the behavior of the time- and frequency-domain detection systems differs at low and at high fluorescein concentrations. At a low fluorescein concentration, the TCSPC system is linear at least down to 1 nM fluorescein (or 3000 counts/s). The linearity should continue until we reach the limit of the detector background count, which in our case was about 100 counts/s. However, at a high concentration, when the counting rate reaches about 2,000,000 counts/s, the electronics saturate and the counts cannot be increased further. On the contrary, we did not notice saturation at high counting rates for the frequency-domain analog detection mode. At a low fluorescein concentration, the analog photocurrent saturates due to the background current, which is not efficiently reduced. In this measurement, we have not made any attempt to subtract the dark current to increase the linearity of the analog system at low concentrations. Using this upper limit (2×10^6 counts/s) for the counts the card (B&H 630) can process and the sensitivity of our laser-scanning microscope, we can estimate the concentration of fluorophores that will saturate the detector in our microscope system. In our instrument, we routinely obtain 30,000 counts/s per molecule of fluorescein when the laser repetition frequency is 80 MHz. Under these conditions, about 60 fluorescein molecules in a pixel should be sufficient to produce electronic saturation of the photon-counting card. If the local concentration in one pixel exceeds the equivalent of 60 fluorescein molecules, the only possibility is to reduce either the laser repetition rate or the laser power, which results in a reduction of the statistics (per equal acquisition time). Of course, in many cases the fluorescence is weaker and the single-photon-counting card can handle the counting rate without saturation. However, we estimate that for typical pixel residence times (100 μs) we cannot exceed about 2000 counts at the brightest pixel without saturation of the instrument electronics. Since 2000 counts are sufficient to determine one lifetime component but result in a relatively large error for the resolution of two components, in the following we concentrate on the single-component lifetime determination and the resolution of, at most, two components. SNR in the Frequency and Time Domains. There is an important issue in relation to the differences between the time domain and the frequency domain. Although we showed that both detection systems respond linearly to the fluorescence in a given intensity range, the two methods differ in SNR. For a comparison of the SNR of the two approaches, we measured the standard deviation of the phase (and modulation) signal by direct measurement in the frequency domain for the analog system and by converting the time-domain data to the frequency domain for the TCSPC card. The phase measurements, repeated at several fluorescein concentrations, are shown in Fig. 5.
At all intensity regimes studied, for equal fluorescein concentration and laser intensity, the time-domain measurement gave smaller phase deviations. We expect both systems to show a standard deviation that varies with the square root of the intensity. However, the ratio between the standard deviations of the phase data in the time domain and the frequency domain varies from about 3.0 at low counts, where the analog detection is dominated by the photomultiplier dark current, to about 1.5 at high count rates. This convergence at high intensity is due to saturation of the photon-counting card and the consequent lack of improvement in the signal standard deviation. While the main reason for the difference in SNR of the two acquisition systems is the noise discrimination of a photon-counting system, there is also a contribution from the difference in quantum efficiency of the two detectors used. To provide an example of what to expect in a typical microscope experiment, we show in Fig. 6 the histogram of phase values calculated after fast Fourier transformation from TCSPC data acquired with a pixel residence time of 1 ms for the 1-μM fluorescein solution using our two-photon microscope. The total number of counts collected for each decay determination was about 1000 per pixel. The standard deviation of the phase measurement is about 2.4 deg, which at 80 MHz corresponds to 0.4 ns. Table 3 shows the harmonic analysis of the FFT time-domain data, averaging 4 neighboring pixels selected in a region of the image. As expected, only a few harmonic components can be reliably used at this count level. We performed a fit of this frequency-domain-translated decay using the Globals WE software. The recovered average lifetime is 4.1±0.7 ns and the residues were very large. A similar result is obtained if we perform the fit directly in the time domain. FLI of Cells Expressing EGFP. In this section, we first report FLI of YFP constructs with the PAT4 protein in the muscle of C. elegans. This sample was provided by Dr. B. Williams of the University of Illinois. On the same worm, we performed time-domain and frequency-domain measurements. Although the laser conditions are the same, the image area is not identical in the two examples. In this study, the image intensity is very low and we should be in the low-count regime discussed in the previous sections. Figure 7 (see Color Plate 1) shows the TCSPC FLI image obtained with the B&H 630 card. The pixel time for this image was 1 ms, obtained by averaging 10 frames with a pixel time of 100 μs. The instantaneous maximum count rate per pixel (about 40,000 counts/s) should always be below the saturation limit of the card. Only pixels with at least 30 counts were analyzed. Pixels were not averaged together. The FLI images are presented in terms of the phase and modulation lifetimes τφ and τm, according to the discussion in the previous section. The lifetime is relatively homogeneous across the image, although the intensity varies from 30 counts (minimum) to about 400 counts (maximum). The histograms of lifetime values are centered at 2.7 and 3.5 ns for the phase and modulation determinations, respectively. This is expected, since in a nonhomogeneous system the phase lifetime is always less than the modulation lifetime. The standard deviation of the lifetime determination is about 1 ns, as expected from the photon statistics of about 100 counts per decay curve. The frequency-domain FLI measurements are shown in Fig. 8 (see Color Plate 1). The values in Fig. 8 are comparable to the values in Fig. 7.
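The 2.4-deg ↔ 0.4-ns correspondence quoted above follows from propagating the phase uncertainty through the phase-lifetime estimator; a quick check (our arithmetic, with a 4-ns lifetime and the 80-MHz fundamental assumed):

```python
import numpy as np

# tau_phi = tan(phi)/w  =>  d(tau)/d(phi) = (1 + tan(phi)**2)/w = (1 + (w*tau)**2)/w
w = 2 * np.pi * 80e6          # angular frequency of the 80-MHz fundamental
tau = 4e-9                    # approximate lifetime of the sample
dphi = np.radians(2.4)        # measured phase standard deviation
dtau = dphi * (1 + (w * tau) ** 2) / w
print(f"{dtau*1e9:.2f} ns")   # ~0.42 ns, consistent with the value quoted above
```

Note that the error grows quadratically with ωτ at large ωτ, one more reason the higher harmonics are of little use at low counts.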
The integration time per pixel was 0.8 ms for Fig. 8. For the FLI analysis, only points with at least 2 nA were analyzed. The average lifetime histograms are comparable with those of Fig. 7. However, the relative standard deviation of the lifetime histogram is larger for the frequency-domain determination, in accord with the smaller SNR of the frequency-domain measurements in the low-count regime. Next, in Fig. 9 (see Color Plate 2) we show FLI images of the construct AK-EGFP in HeLa cells. The left panel is the intensity image and the right panel is the phase image. The image was obtained at a rate of 100 μs/pixel. Intensity and phase histograms from a region of about 4600 points in the interior of the cell are also shown in the bottom part of the figure. In this case, the intensity histogram in the frequency domain is centered at about 1000 nA. The intensity expressed as a current is independent of the pixel residence time. Using the curves of Fig. 4, from which we derive that 1 nA corresponds to approximately 10,000 counts/s, our estimate of the equivalent instantaneous counting rate per pixel is about 10^7 photons/s for the bright spots of this image. Note that the counting rate for this image is well above the upper limit permitted by our TCSPC card. The effective number of photons detected in 100 μs for the bright spots is about 1000. The phase histogram shows a standard deviation consistent with this number of detected photons. Figure 10 (see Color Plate 2) schematically shows the different ranges of operation for the two fluorescence lifetime measurement methods. The green region, which is the region of operation of the TCSPC electronics, is drawn assuming that at most 10^6 counts can be processed per second. This is at the limit of our particular electronics (B&H model 630 card). We note that faster electronics are now available. Our only purpose is to show that there is a limit of counts above which the TCSPC detection will saturate. Discussion. In a log-log plot, the boundary lines for different acquisition electronics and manufacturers fall almost on top of one another. For the frequency-domain method, this boundary line, due to the recovery time of the card, does not exist. Another important line in this figure is the time for pixel acquisition. We chose as a limit 400 μs/pixel, which gives about 26 s for frame acquisition for a 256×256 image. This limit is arbitrary, since in principle the time spent in a pixel can be made longer. However, in our experience using living cells, 26 s is considered an upper limit. The frame size can also be reduced, at the expense of either image resolution or field of view. Again, the graph of Fig. 10 should serve only as a reference. Another important family of lines is the number of molecules per pixel. The number of molecules is estimated using fluorescein as an example and assuming that one molecule of fluorescein produces about 30,000 counts/s. Of course, this figure depends on the microscope setup, the laser intensity, and other instrumental parameters. Using this estimation, the graph shows that above about 30 to 40 molecules/pixel, the TCSPC method cannot keep up unless the average laser intensity is reduced. In practice, what is usually done when using the TCSPC method is to reduce the laser repetition rate. Instead of operating the laser at a high repetition rate (generally 80 MHz), the laser frequency is divided by a factor (generally 10 or 20) using a pulse picker or a cavity dumper.
Apparently, there is very little penalty in performing this operation in the context of the TCSPC method, since the acquisition rate in the TCSPC method is determined by the speed of the electronics. In fact, most TCSPC systems operate at a laser frequency of 4 MHz or less. When comparing the time-domain and the frequency-domain methods, the same number of detected photons can be obtained during a much smaller integration time in the frequency domain operating at 80 MHz, since the detector does not saturate. This is crucial for the microscope environment, where the total duration of frame acquisition cannot be very large. If the number of fluorophores per pixel is below 30, then the TCSPC method can process all the data, and in this regime the time-domain method provides a better SNR. Another useful set of lines in this figure is the family of lines for the counts required to resolve multiple components. In our estimates, we determined that we need at least 100 counts in the histogram to obtain a lifetime with an uncertainty of about 20%. To resolve two components under ideal circumstances, i.e., when the lifetimes of the components are well separated and the two components have comparable intensity, we require about 1000 photons. For 3 components, we require a factor of 10 more. These are only order-of-magnitude estimates, and the actual number of counts required varies depending on the separation and relative intensity of the lifetime components. In normal practice for TCSPC measurements in a cuvette, much larger numbers of counts are collected in a decay curve because a better precision in the resolution of multiple components is the goal of cuvette measurements. Inspection of Fig. 10 shows that the TCSPC method could separate two components in the microscope environment only for very long frame integration times. In fact, the region in cyan color in the figure is the region in which the TCSPC method could be used in the microscope. Clearly, this is a small region of the counts−pixel time space. In many situations in microscopy, we are dealing with 100 or 1000 molecules in a pixel. This region cannot be reached with short integration times using the TCSPC method; the laser repetition rate or the laser intensity must be attenuated to fulfill the counting-rate requirements of the TCSPC electronics. The frequency-domain technique is not limited by a high count rate. Conclusions. In principle, the SPC electronics provides a better SNR because of the discrimination of the dark counts and because of the assignment of a specific delay to each photon. For very weak fluorescent samples and using long integration times, this capability could be advantageous. However, this advantage is not crucial if we consider that the frequency-domain method is intrinsically based on a lock-in method and that the dark noise is also very efficiently discriminated. For the microscope environment, the potential advantage of the TCSPC method is reduced by the relatively slow processing electronics. We have estimated that in common experiments the rate of photon detection in the bright pixels of a typical image is well above the processing capability of the SPC electronics. This effect results in fewer photons detected, distortion of the image intensity, and an overall increase of the noise. Our conclusion is that the TCSPC method is advantageous for weak samples and very long integration times. For example, for single-molecule studies it is the only possibility.
Under all other conditions, the analog frequency-domain method can result in an overall better SNR due to its smaller dead time and to the lack of saturation of the detection electronics.
Sampling from Social Networks with Attributes. Sampling from large networks represents a fundamental challenge for social network research. In this paper, we explore the sensitivity of different sampling techniques (node sampling, edge sampling, random walk sampling, and snowball sampling) on social networks with attributes. We consider the special case of networks (i) where we have one attribute with two values (e.g., male and female in the case of gender), (ii) where the size of the two groups is unequal (e.g., a male majority and a female minority), and (iii) where nodes with the same or different attribute value attract or repel each other (i.e., homophilic or heterophilic behavior). We evaluate the different sampling techniques with respect to conserving the position of nodes and the visibility of groups in such networks. Experiments are conducted both on synthetic and empirical social networks. Our results provide evidence that different network sampling techniques are highly sensitive with regard to capturing the expected centrality of nodes, and that their accuracy depends on relative group-size differences and on the level of homophily that can be observed in the network. We conclude that uninformed sampling from social networks with attributes can thus significantly impair the ability of researchers to draw valid conclusions about the centrality of nodes and the visibility or invisibility of groups in social networks. INTRODUCTION. Sampling from large networks represents a fundamental problem for social network research. In order to draw valid conclusions from network samples, it is essential to understand how accurately samples reflect the position of nodes in the original network. Previous research has studied the robustness of network samples from different angles, for example by examining the accuracy of network measures such as degree or betweenness centrality. A range of network properties has been found to be sensitive to the choice of sampling method [4,6,11,13,15,16,18,30]. Motivation and problem. In this paper, we focus on the specific problem of sampling nodes and edges from a social network with attributes, i.e., a network where nodes are colored. For example, the color of nodes might be determined by gender, ethnicity, or age. We consider the special case of networks (i) where one binary attribute can be observed (e.g., a male and a female group of nodes), (ii) where the size of the two groups is unequal (e.g., a male majority and a female minority), and (iii) where nodes with the same or different attribute value attract or repel each other, i.e., homophilic [26] or heterophilic networks [3]. While the general impact of sampling on network characteristics has been studied thoroughly in the past [4,6,11,13,15,16,30], the role of attributes in combination with fundamental social mechanisms such as homophily [21,27] has so far received only little attention [19]. In fact, little is known about whether or how different sampling techniques are able to conserve the ranking of nodes or the visibility of groups from the original network. Accurately capturing the network characteristics of groups of nodes in sampled data, however, is crucial not only for researchers interested in directly studying these groups (e.g., gender or sociological studies), but also for researchers interested in analyzing the structure of the complete network, since attributes of actors can impact the overall network structure [5,21,27]. Research questions.
In this paper, we thus ask: How sensitive are different sampling techniques with respect to conserving the ranking of nodes and the visibility of groups in synthetic and empirical social networks with (i) different minority and majority group proportions, and (ii) various levels of homophily? Methods and materials. We evaluate different sampling techniques (node sampling, edge sampling, random walk sampling, and snowball sampling) with respect to reflecting the ranking of nodes and the visibility of groups in network samples (see Figure 1). Instead of putting the focus on the whole population as in previous work, we specifically focus on sub-populations (or groups); we call the larger group the majority and the smaller group the minority. Our work is guided by the intuition that an ideal sample would accurately preserve the original degree centrality ranking of nodes, and therefore preserve the relative importance between nodes and groups. That is, an ideal sample would not systematically rank nodes of one group higher and nodes of the other group lower than expected; such a result would be considered a biased sample or a sampling error. Figure 1: Illustration. This example shows a heterophilic and a homophilic network with a red minority and a blue majority group. We illustrate that sampling methods may differ in their ability to preserve the visibility of the minority group when ranking sampled nodes by their degree centrality. We construct synthetic social networks and vary the structural mechanisms guiding the growth of the network (i.e., homophily, preferential attachment, and group sizes) to study the extent to which they impact the accuracy of samples. We additionally showcase observed artifacts on empirical networks. Based on the obtained insights, we provide indicators of why samples might have issues with capturing expected group characteristics. Contributions and findings. (i) We propose a method to measure the robustness of samples from networks with one binary attribute. (ii) Using synthetic and empirical networks, we provide evidence that different network sampling techniques have issues with capturing the expected centrality of nodes and the visibility of minority / majority groups in social networks. (iii) We discuss network characteristics that lead to the observed discrepancies and quantify the impact of relative group-size differences and homophily on sampling errors. BACKGROUND AND RELATED WORK. Network analysis has long been plagued by issues of measurement error, usually in the form of missing data. Understanding the robustness of basic network measures is extremely important in order to assess the validity of network research. Prior research explored the impact of missing data on various network measures, but mainly focused on small sociometric networks [6,11], small bipartite collaboration graphs [15], and random networks [4,15]. Smith and Moody [28] extended this line of research and analyzed four classes of network measures on 12 relatively small (< 1000 nodes) empirical networks. They found that larger, more centralized networks are in general more robust to nodes missing at random, especially for centrality and centralization measures. This is plausible, since random node deletion in a centralized network (with a skewed degree distribution) is less likely to remove hubs, because few of them exist. In our work, we do not explore the effect of random node deletion but compare different sampling methods.
Node sampling is the opposite of random node deletion, since the randomly selected nodes are included in the sample. Our results throughout this paper show that random node sampling from centralized networks (heterophilic networks with a very popular minority) not only fails to capture the centralization of the network well (since we miss the hubs), but also fails to accurately capture the relative importance of groups. Wang et al. [30] presented the first work that explores the sensitivity of different network measures with respect to missing data in two large online social networks and one random graph. They defined six different types of measurement errors (missing nodes, spurious nodes, missing edges, spurious edges, falsely aggregated nodes, and falsely disaggregated nodes) and simulated their effect on the complete network. Using Spearman rank correlation, the authors compared the list of nodes that is ranked based on the network measure in the original network with the one that is computed on the sample. The work finds support for Borgatti's findings [4], highlighting that different centrality measures are similarly robust to measurement errors. Interestingly, results show that more local network measures like clustering are more prone to missing data than more global measures such as centralities. Thus, the authors revised the general claim from past research that the more "global" a measure, the less resistant it is to measurement error. Lee et al. [17] analyzed scale-free networks and three empirical networks, suggesting that network properties such as betweenness centrality or clustering are sensitive to the choice of sampling method. Lee and Pfeffer [16] explored the quality of sampling by comparing the node-level network scores induced from the sample and the original network. They used edge sampling and focused on degree and betweenness centrality for two empirical communication networks. Their results show that larger samples lead to higher sampling accuracy and that centralized graphs, in which fewer nodes enjoy higher attention, offer more accurate samples when edge sampling is used. Our work extends their work, since we compare various sampling techniques and introduce groups and homophily. Furthermore, Leskovec and Faloutsos [18] showed that network properties are sensitive to the choice of the sampling method. However, they assessed the quality of a sample by comparing the shape of the distribution of a network measure (e.g., degree) in the sample with the original one using the Kolmogorov-Smirnov distance. This evaluation criterion is very different from what has been used in previous work and what we use in this work, since it does not take the accuracy of the ranking of nodes into account. Most prior work shows that network estimates become more inaccurate with lower sample coverage, but there is a wide variability of these effects across different measures, network topologies and sampling errors. To the best of our knowledge, most previous work neglected the existence of heterogeneous attributes in networks and did not analyze the interplay between mechanisms that impact the topology of a social network and the accuracy of sampling techniques. A notable exception is the work by Li and Ye [19], who explored the ratio of intra- and inter-group links in samples drawn from a sample of the follow-network of Twitter users. Our work extends their work by systematically exploring the effect of group sizes and homophily on the visibility of individual nodes.
Our work focuses on undirected networks, but the work by Huisman [13] provides a comparison of sample bias in directed and undirected versions of the same network. METHODS In this work, we are interested in studying the accuracy of samples drawn from networks with unequally sized groups and various levels of homophily. We (i) describe used sampling techniques and (ii) explain how we assess the accuracy of a sample. Sampling techniques Our goal is to sample K nodes from the overall set of N nodes in a network. As pointed out in [18], we can split sampling algorithms into three groups: methods based on randomly selecting nodes, randomly selecting edges, and exploration techniques simulating random walks or virus propagation to find a representative sample of nodes. We focus on one sampling technique from each group: Random node sampling. This is the most basic sampling technique where a random subset of K nodes is selected. The sampled network then contains these K nodes and all links between them. Random node sampling is e.g., used when a sample of individuals is first selected and then their contact behavior is observed. Numerous surveys and data collections use this method, e.g., measuring contact pattern among high school students using wearable sensors [20]. Random edge sampling. This strategy randomly samples edges from the network and filters the complete network by sampled edges. To be consistent with the other sampling strategies, we successively sample edges until K nodes are selected. The sampled network then contains these K nodes and sampled links, but not those links between selected nodes that have not been sampled. Random edge sampling is commonly used to construct a social graph by using information about contacts-e.g., phone calls are sampled and a graph of callers and receivers is constructed [12]. Snowball sampling. In snowball sampling, we randomly sample one starting node and add all its neighbors as well as the neighbors' neighbors to the set of sampled nodes-i.e., two step snowball sampling. We repeat this until we have gathered K nodes for the sample. If a full iteration does not catch K nodes, we repeat the process again with a new randomly selected starting node. The sampled network then contains these K nodes and all the links connecting them. Traditionally, snowball sampling is used when the population under study is not easily accessible (e.g., to study homeless people or illegal immigrants). Indeed, the promise of the snowball sampling is to access hard-to-reach population [1]. Random walk (RW) sampling. This strategy samples nodes by walking through the network. The walker starts at a random node in the network and chooses in each step one out-going link randomly and traverses it. All visited nodes are then added to the sample until K nodes have been added. A teleport probability can be set for teleporting to another random node in the network instead of traversing a link in this iteration; we use 0.15 throughout this work. The sampled network then contains these K nodes and all links between them. This technique of sampling is usually used in online social networks such as Facebook or Twitter, in which retrieving information about the whole population is overwhelming and computationally costly, but we can access and navigate the original network. Evaluation measures The ubiquity of sampled network data makes the understanding of the robustness of network measures crucial. 
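Before turning to the evaluation measures, the four sampling strategies described above can be summarized in code. The sketch below is illustrative only: it operates on an undirected networkx graph, the function names are our own, and boundary cases (such as slightly overshooting K) are handled loosely rather than exactly as in the experiments.

import random
import networkx as nx

def node_sample(G, K):
    # Random node sampling: keep K random nodes and all edges among them.
    nodes = random.sample(list(G.nodes()), K)
    return G.subgraph(nodes).copy()

def edge_sample(G, K):
    # Random edge sampling: add sampled edges until K nodes are covered;
    # unsampled edges between selected nodes are not included.
    H = nx.Graph()
    edges = list(G.edges())
    random.shuffle(edges)
    for u, v in edges:
        H.add_edge(u, v)
        if H.number_of_nodes() >= K:
            break
    return H

def snowball_sample(G, K):
    # Two-step snowball sampling from random seed nodes until K nodes are gathered.
    sampled = set()
    while len(sampled) < K:
        seed = random.choice(list(G.nodes()))
        ball = {seed} | set(G[seed])
        for nbr in G[seed]:
            ball |= set(G[nbr])
        sampled |= ball
    return G.subgraph(sampled).copy()

def random_walk_sample(G, K, teleport=0.15):
    # Random walk sampling with a teleport probability (0.15 throughout this work).
    current = random.choice(list(G.nodes()))
    sampled = {current}
    while len(sampled) < K:
        if random.random() < teleport or G.degree(current) == 0:
            current = random.choice(list(G.nodes()))
        else:
            current = random.choice(list(G[current]))
        sampled.add(current)
    return G.subgraph(sampled).copy()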
Here, we focus on the most basic and widely used centrality measure: degree centrality [10]. The degree centrality of a node is defined as the fraction of nodes it is connected to. Previous work explored the robustness of centrality measures in samples of networks without taking heterogeneous attributes of nodes into account. Therefore, simple rank correlation (see e.g., [6,16,28,30]) and overlap measures (see e.g., [4]) have been used to assess how well a sample captures the ranking of nodes according to various network measures. In this work, we are interested in assessing how well a sample captures, on average, the overall position of nodes in the original network for each group of nodes separately. That means, we aim to reveal if the positions of nodes in both groups are equally well captured in a way that the relative group and node importance are preserved. If we would compute the overall rank correlation (or overlap) between the two lists and ignore the group memberships, then the ranking of majority nodes would contribute more to the correlation coefficient (or overlap). A naive group-specific measure would be to compute a separate rank correlation (or overlap) for each group. However, this measure would only allow us to assess how well the relative importance of nodes within each group in the original network is preserved in the sample, but the relation between nodes across groups would be neglected. Therefore, simple rank correlation or overlap measures cannot be used to assess whether the relevance of nodes and groups is accurately captured in a sample. In this work we define an ideal sample as a sample that allows to accurately reconstruct the original degree centrality ranking of nodes and therefore preserves the relative importance between nodes and groups. That means, an ideal sample does not systematically rank nodes of one group higher and nodes of the other group lower than expected. To assess the accuracy of the relative importance of nodes and groups, we propose the following two evaluation measures. Both evaluation measures focus on the top k or top k percent of the data, since (i) users focus on the first few results in ranked lists and (ii) the distribution of degree centralities are usually heavy tail distributions. Therefore, the contribution of disorders in the long tail (unpopular nodes) would dominate disorders in the head (popular nodes) if we would not limit our analysis to the head [32]. Top k bias. To assess the accuracy of group visibility in a sample, we compare the fraction of minority nodes in the top k nodes of a sample with its fraction in the top k nodes of the complete network. Observed topk refers to the fraction of minority nodes that we observe in the top k nodes of the sample, while expected topk refers to the fraction of minority nodes in the top k nodes of the original network. As sample size grows, the observed fraction in the sample approaches the expected fraction. Figure 2: Degree distribution of synthetic networks. The average degree distribution of majority (80% of nodes) and minority (20% of nodes) in a synthetically generated preferential attachment network with various levels of homophily. One can see that the degree distributions are almost equal if homophily does not play a role (h = 0.5). In heterophilic networks (h < 0.5) the group-specific differences are much more pronounced than in homophilic networks (h > 0.5). The top k ratio is a binary measure that does not take the importance of individual nodes into account. 
That means, we cannot measure how much lower the ranking of a node is in the sample compared to its ranking in the complete network. To overcome this limitation, we first compute the relevance for each node i by ranking nodes based on their centrality in the original network. The relevance of node i is defined as the inverse rank that belongs to node i, normalized by the rank sum of all nodes (N) in the original network: relevance(i) = (N + 1 - rank(i)) / (N (N + 1) / 2). The relevance shrinks linearly with the position of nodes in the list, but different weighting is possible. We compute for each group g its cumulative group relevance (CGR) at rank k in the original ranked list and compare it with the cumulative relevance at rank k in the sample: nCGR_topk(g) = CGR_topk(g, sample) / CGR_topk(g, original). The nCGR_topk measures the extent to which the relevance of a group in the sample is above or below what we would expect from the original network with respect to the top k nodes. If, e.g., this normalized cumulative group relevance for the minority is 2, then that means that the minority is twice as relevant in the sample as in the original network (for some top k). If it is 0.5, then the group is half as relevant in the sample as in the original network. If it is 1, then the group has equal relevance in the original network and the sample. We analyze the log of the normalized cumulative relevance since otherwise the measure is bounded below by zero; thus, the ideal nCGR is zero. To avoid division by zero and logarithm of zero, we add a small ε = 0.001. SIMULATION EXPERIMENTS We construct synthetic networks and explore the effect of homophily and group size on the accuracy of samples in a controlled environment. First, we describe the network model which we use to create synthetic network data and, second, we discuss the accuracy of centrality measures in samples drawn from these networks using different sampling methods. Synthetic network generators Preferential attachment (the tendency of nodes to connect to popular nodes) [2,33] and homophily (the tendency of nodes to connect to similar nodes) [21,27] have been extensively observed in many real-world social networks [7,9,23,31] and information networks [22,24]. Homophily implies the existence of at least one fixed or mutable attribute (e.g., gender, ethnicity, education status). Based on these attributes, similarities between nodes can be defined. We use an existing preferential attachment growth model with a homophily parameter that can be tuned and thus allows us to create networks with different levels of homophily and heterophily (see [8,14] for details). The homophily parameter h ranges from 0 to 1, h ∈ [0, 1], where 0 means that nodes are only attracted by nodes that are dissimilar to them (heterophily), 1 means nodes prefer to connect with similar nodes (homophily), and 0.5 means that the link formation behavior is not driven by attributes. All nodes of the same group share the same homophily parameter h, because they share the same attribute value and thus have the same distance to other groups with different attribute values. We generate all synthetic networks with 10,000 nodes and a fixed minority ratio of 20% (except when noted otherwise). An incoming node connects to 10 nodes based on a specific homophily parameter and popularity (see [14]). Figure 2 shows the degree distribution of both groups of nodes in networks that only vary in their degree of homophily. One can see that if we have two groups of unequal size and the network is heterophilic (h < 0.5), the degree distributions of majority and minority differ the most.
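For concreteness, the top-k bias and the normalized cumulative group relevance defined in the previous section translate almost directly into code. The following sketch reflects our own choices: degrees are passed as node-to-degree dictionaries, the group is passed as a set of node identifiers, and ties between equal degrees are broken by the sort order.

import math

def topk_minority_fraction(degree, minority, k):
    # Fraction of minority nodes among the top-k nodes ranked by degree.
    top = sorted(degree, key=degree.get, reverse=True)[:k]
    return sum(1 for n in top if n in minority) / k

def relevance(degree):
    # Inverse rank of each node, normalized by the rank sum over all N nodes.
    ranked = sorted(degree, key=degree.get, reverse=True)
    N = len(ranked)
    rank_sum = N * (N + 1) / 2
    return {n: (N - r) / rank_sum for r, n in enumerate(ranked)}

def ncgr_topk(orig_degree, sample_degree, group, k, eps=0.001):
    # Log-ratio of the cumulative group relevance in the sample vs. the original network.
    rel = relevance(orig_degree)  # relevance is fixed by the original ranking
    top_orig = sorted(orig_degree, key=orig_degree.get, reverse=True)[:k]
    top_samp = sorted(sample_degree, key=sample_degree.get, reverse=True)[:k]
    cgr_orig = sum(rel[n] for n in top_orig if n in group)
    cgr_samp = sum(rel.get(n, 0.0) for n in top_samp if n in group)
    return math.log((cgr_samp + eps) / (cgr_orig + eps))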
In fact, the fraction of high degree nodes that are part of the minority is much higher than it is for majority nodes. This is not surprising since the majority is attracted by the minority which therefore becomes an elite of powerful nodes in the network. If the group membership does not play a role (h = 0.5), the degree distributions of both groups are almost identical because only degree impacts the formation of edges and degree is equally distributed across groups. Also if the two groups are separated (h = 1.0), the degree distributions are similar because both groups grow similarly and do not compete. The popularity of a group is bound by its size and therefore nodes with the highest degree are majority nodes. If we compare the degree distribution of the two groups in a moderate heterophilic network with h = 0.25 and a moderate homophilic network with h = 0.75, we see that the differences between the degree distributions are more pronounced in the heterophilic case. This asymmetric effect can be explained by the interplay between group size differences and homophily. The majority benefits from moderate homophily (e.g. h = 0.75) more than from high homophily (e.g., h = 0.9), because in high homophily conditions, their maximum degree is bound to the size of their group, while in moderate homophily conditions, sometimes also minority nodes will be attracted by the high degree of majority nodes. Unlike the majority in the homophilic case, the minority in the heterophilic case benefits more from extreme heterophily (i.e., h = 0.0) than from moderate heterophily (e.g. h = 0.25). That is because in the extreme heterophily condition, all majority nodes are attracted by the minority, but in moderate heterophily condition sometimes the majority is attracted by high degree nodes which can also be part of their group. So for the minority to gain popularity, it is better if they do not have to compete with the majority while the majority benefits from a competitive environment. In the next section, we will analyze how these group-specific differences in the degree distributions relate to sample biases. Sample bias in synthetic networks To assess sample bias, we generate synthetic networks, draw samples of varying size from them using different sampling techniques and assess the average visibility and relevance of different groups in samples. We repeat the random network generation process 10 times and draw 10 samples from each network; thus, in our evaluation, we report mean and standard error over 100 samples. Figure 3 shows the visibility of the minority group in the top 100 nodes in samples of different size which have been created via different sampling methods. For example, in Figure 3 (a), the point for the green line at an x-value of 0.10 indicates that the top 100 ranked nodes based on degree centrality in a 10% sample from a moderate heterophilic network with h = 0.25, contains on average around 40% minority nodes. We can compare this observed percentage with the expected percentage from the original network (100% sample). In this case, we would expect to see close to 80% of minority nodes in the top 100 nodes indicating that the minority is underrepresented in small samples drawn from moderate heterophilic networks with unbalanced group sizes using node sampling. Results show that especially node and snowball sampling reduce the visibility of minority groups in the top k list if samples are drawn from extreme and moderate heterophilic networks. 
For node sampling, this is not surprising since all nodes have equal probability to be picked and therefore, a node's sampling probability is proportional to its group size. Snowball samples aggregate the 2-hop neighbourhood of randomly selected seed nodes which likely are majority nodes. Since most majority nodes are unpopular (skewed degree distribution), the probability for picking a majority node that has only a few minority nodes as neighbours is high. Thus, we underestimate the visibility of the minority group in the top k. Figure 4 shows that in the heterophilic network, the bias of node and snowball samples decreases linearly with decreasing group size difference. Note that group sizes are balanced if the minority ratio is 0.5. We further find that RW samples are very robust against relative size differences between groups in homophilic and heterophilic networks. In Figure 5 we show to what extent the original relevance of each group is preserved in the sample. We find that in most cases the relevance of the minority is underestimated. Only in moderate homophilic networks, minority is overrepresented. However, one needs to note that the extent with which the relevancy of the minority is overestimated in moderate homophilic networks (h = 0.75, 4th row) is lower than the extent with which the relevancy of the majority is overestimated in moderate heterophilic networks (h = 0.25, 2nd row). Overall, we see that (i) the most accurate samples can be drawn from networks where homophily does not play a role, (ii) RW sampling performs best independent of the homophily conditions (see Figure 3 and 5) and relative group size differences (see Figure 4), (iii) all sampling methods perform similar if group size differences are small, and (iv) Figure 4: Relative group size differences. The y-axis shows the relevance of the minority group in the top 100 nodes of the sample network compared to the original network. The x-axis shows the relative size of the minority group. The sample size is 10% of the original network. One can see that in samples drawn from heterophilic networks, the relevance of the minority is always underestimated; especially node and snowball sampling fail when group size differences are large in heterophilic networks. In homophilic networks the relevance of the minority is overestimated if the fraction of the minority group is very low. Node and edge sampling produce the most biased samples in this condition. Overall, we see that the more balanced the group sizes (0.5 means that 50% of the nodes belong to minority) are, the more accurate the sample and the more similar the performance of different sampling techniques are. RW sampling performs best in all conditions and sampling errors are always higher in heterophilic networks than in homophilic ones. the sampling error is always higher in heterophilic networks than in homophilic networks if the same sampling technique and group size differences are considered. Regression analysis. To compare the impact of different factors on the sampling bias, we fit eight simple linear regression models, one model for each sampling technique and each error measure (top k minority bias bias topk and the absolute sum of the normalized cumulative group relevance nCGR topk of the minority and the majority group). Each model was fitted to 3,200 observations (samples drawn from synthetically generated networks). 
Table 1 shows that across all sampling methods-perhaps not surprisingly-smaller samples lead to higher sampling errors and larger top k lists lead to higher errors because the size of the network is constant. Interestingly, we see that only for node and snowball samples, the sampling error increases, if group size differences and the influence of the attribute on the edge formation behavior (i.e., the homophily parameter is closer to 0 or 1) increase. If only one of these factors changes, no significant effects on the sampling error can be observed, except for snowball samples. The bias of snowball samples also increases significantly if only homophily increases, because in extreme homophilic networks a snowball sample can only contain nodes of one group also if groups are of equal size. One can see that the sampling error of RW and edge samples cannot be explained by group size differences and homophily, which confirms our observation that these methods are rather robust against these factors. EMPIRICAL EXPERIMENTS Next, we analyse two empirical networks and explore the accuracy of samples drawn from these networks. We describe the statistical properties of these networks and contrast empirical findings with the findings obtained from simulation. Pokec social network Dataset. We study publicly available data 1 obtained from the most popular Slovakian social network "Pokec" [29]. We added all friendship relations as undirected edges. The network contains 1, 632, 640 nodes (users) and 22, 301, 602 edges (friendship relations). The average degree of nodes is 27.32, the global clustering coefficient is 0.0069, and the graph diameter is 14. For our experiments, we focus on the age of actors in the social network. Eliminating all nodes without age information results in a network with 1, 138, 314 nodes connected by 14, 975, 771 edges. For coloring nodes as minority and majority, we take the 80% percentile of the overall age distribution, and color all nodes with an age higher than this percentile as belonging to the minority (old users), and all below as belonging to the majority. This results in an age cut-off of 31 years, meaning that the minority-18.8% of all nodes-captures the oldest users in the network. Overall, around 92% of all edges in the network are between nodes of the same color-i.e., between two minority or two majority nodes. This exceeds the expectation of around 81.3% if edges would form totally at random. From that we can assert that the Pokec social network is moderately homophilic with respect to the defined age groups. Figure 6 shows the degree distribution of young and old users. One can see that the most popular users are part of the majority. Results. Figure 7 shows that the visibility of the minority and the relevance of both groups is very well preserved in all samples. This is in line with what our model suggests for very homophilic networks (see Figure 7). Interestingly, random walk sampling produces the most accurate sample, which is also suggested by our model, especially for large relative groups size differences (see Figure 4(c)). Sexual contact network Dataset. We use a network of claimed sexual contacts between Brazilian escorts (prostitutes) and sex buyers [25]. 1 https://snap.stanford.edu/data/soc-pokec.html Figure 5: Normalized Cumulative Group Relevance. Each column depicts a different sampling technique, while each row refers to a different world for which the homophily level of the original network varies. 
The axes are aligned within each row, but not within each column, since the extent of error varies depending on the world. Again, each point refers to an average evaluation over 100 total iterations. One can see that in extreme heterophilic networks (first row) the relevance of the majority is overestimated in small as well as in larger samples, while the relevance of the minority is slightly underestimated, especially in small samples. In extreme homophilic networks (last row), it is the other way around; however, the extent to which the relevance of the minority is overestimated is smaller than the extent to which the relevance of the majority is overestimated in the extreme heterophilic case. Overall, random walk sampling produces the most accurate samples, followed by edge sampling. In samples based on node and snowball sampling, the relevance of the minority is usually underestimated, except in moderate homophilic networks (4th row). Figure 6: Degree distribution of empirical social networks (Pokec and Sexworker). In the homophilic Pokec social network, nodes with the highest degree tend to belong to the majority (young users). For the heterophilic sexworker network, the most popular nodes belong to the minority (women) since the majority (men) is attracted by the minority and the other way around. The network consists of 16,730 nodes (6,624 sex workers and 10,106 sex buyers) and 50,632 edges between them. The minority of nodes, with a share of around 40%, are sex workers, while the majority are sex buyers. The network is fully bipartite: since it captures sexual contacts, sex workers only connect with sex buyers. Consequently, all edges within the network are between nodes of different color and thus the network is 100% heterophilic. The degree distributions of minorities and majorities show that minorities are more popular than majorities (see Figure 6). This is not surprising because the network is an example of an extreme heterophilic network since the majority nodes are attracted by the minority nodes and the other way around. Results. Figure 7 shows that the minority (escorts) is very visible in the top 100 nodes ranked by degree centrality, even in samples of small size. Node-based samples are the most inaccurate samples, since they underestimate the visibility and relevance of the minority most. Edge-based samples capture the visibility of the minority in the original network best if the original network is extremely heterophilic. Our model suggests that no large differences in the performance of different sampling techniques (as suggested by Figure 4) will exist because group size differences are rather small (40:60); but edge and RW sampling will produce more accurate samples than node and snowball sampling. Further, we can expect that all samples will underestimate the relevance of the minority. These expectations are confirmed empirically (cf. Figure 7, bottom row). DISCUSSION If homophily (or heterophily) is the driving force behind the formation of edges in social networks with unbalanced attribute distributions, then the attribute and the degree of nodes become statistically dependent, i.e., P(attribute|degree) ≠ P(attribute) and P(degree|attribute) ≠ P(degree). Our work shows that if a statistical dependency between the network structure and the attribute of interest exists, all sampling methods introduce bias w.r.t. capturing the importance of nodes compared to when no relationship exists.
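Such a dependency can be checked empirically on a given network before sampling. The following is only a rough sketch on a networkx graph; the quantile used to define "high degree" nodes is an arbitrary choice of ours.

def minority_share(nodes, minority):
    nodes = list(nodes)
    return sum(1 for n in nodes if n in minority) / len(nodes)

def dependence_gap(G, minority, quantile=0.9):
    # Difference between P(minority | high degree) and P(minority);
    # values far from zero indicate that attribute and degree are dependent.
    degrees = dict(G.degree())
    cutoff = sorted(degrees.values())[int(quantile * len(degrees))]
    hubs = [n for n, d in degrees.items() if d >= cutoff]
    return minority_share(hubs, minority) - minority_share(G.nodes(), minority)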
However, not all sampling techniques are equally prone to group size differences and attribute influence on edge formation behavior which lead to statistical dependency between the network structure and the attribute of interest. While sampling errors in node and snowball samples clearly increase if group size differences and attribute influence are increased, random walk and edge sampling are more robust against these factors. This can be explained by the fact that e.g., random walk and edge sampling favor high degree nodes and aim to preserve the degree distribution of nodes. Therefore, systematic differences in the degree of nodes in different groups can, to some extent, be captured. The sampling error in snowball samples also increases, if only the influence of attributes on the edge selection behavior increases (see Table 1). This indicates, that even if group sizes are balanced, homophily or heterophily may cause problems in snowball samples. Interestingly, the overestimation of the importance of a majority in heterophilic networks is more pronounced than the overestimation of the importance of minorities in homophilic networks. This can be explained by an asymmetry in the differences in degree distributions. In heterophilic networks, the difference between minority and majority degree distributions is larger than in a comparable homophilic network (same group sizes and similar impact of group membership on formation of edges). Our observations from two real-world social networks confirm our simulation results and show that in heterophilic networks, the relevance of majority nodes is Table 1: Coefficients of eight linear regression models, one for each sampling technique and sampling error measure. Each model was fitted to 3,200 observations (samples drawn from synthetically generated networks). The interaction term between group size difference and attribute influence is significant in node and snowball samples, but not in RW and edge samples. This indicates, the sampling error increases in node and snowball samples if the group size difference and the influence of attributes on the edge formation behavior are both increased. Edge and RW samples are rather robust against these factors. We compute the sampling error for lists of different length k and control for the effect of k in the model. The larger k, the higher the error. We also observe on average larger sampling errors on smaller sample sizes. Note: * * p < 0.01; * * * p < 0.001. Only for small top k, the minority is slightly more visible than expected if samples are generated via node, snowball or edge sampling. In samples drawn from the Sexworker network, we see that the minority (escorts) is very visible in the top 100 nodes ranked by degree centrality. Edge-based samples capture visibility of the minority in the original network best. The relevance of the majority is overestimated as also suggested by our model. Edge sampling produces the most accurate samples. overestimated while in homophilic networks, it is slightly underestimated. One limitation of our network generation model is that we limit it to two groups and that it assumes that all nodes in a group are equally active and behave equally homophilic or heterophilic. In real world social networks, more groups and group-specific and individual behavioral differences can be present. Future research is necessary to study the effect of group-specific activity difference and asymmetric homophilic behavior and needs to explore the presence of multiple groups. 
Furthermore, we focus on one specific network measure and on undirected networks, warranting further exploration of the accuracy of various network measures in samples drawn from directed networks. Our work can be extended to more than one binary attribute by simply defining a similarity function that takes several attributes into account. CONCLUSIONS In summary, our work shows that the combination of two factors leads to sampling error in social networks with attributes: (i) group size differences and (ii) homophily. If unequally sized groups are present, random walk sampling always leads to the most accurate samples, independent of the level of homophily. The sampling error is always larger if samples are drawn from heterophilic networks with unequally sized groups compared to homophilic ones. In heterophilic networks with unbalanced groups, random walk and edge sampling perform similarly well, while in homophilic networks edge sampling produces more biased samples than random walk sampling. This can be explained by the fact that in homophilic networks edge sampling overestimates the importance of minority nodes, since minority nodes with high degree are more likely to be selected. Edge samples only include sampled edges, but not all other edges between selected nodes. Therefore, the difference in degree between minority and majority nodes can be skewed. Most sampling techniques produce accurate samples if the groups are of equal size. Only snowball samples can still be biased if homophily is a driving force behind the edge formation of nodes that belong to two equally sized groups. Since researchers often do not have information about group size differences and homophily in the original network, random walk sampling is a robust choice. However, researchers cannot always choose their sampling method freely. Therefore, our results provide important guidance for estimating which groups will be over- or underestimated in samples drawn from social networks with unequally sized groups and various levels of homophily. It is our hope that the research presented in this paper motivates more research into sampling from social networks with attributes.
9,052.4
2017-02-17T00:00:00.000
[ "Computer Science", "Sociology" ]
The FLORES Evaluation Datasets for Low-Resource Machine Translation: Nepali–English and Sinhala–English For machine translation, a vast majority of language pairs in the world are considered low-resource because they have little parallel data available. Besides the technical challenges of learning with limited supervision, it is difficult to evaluate methods trained on low-resource language pairs because of the lack of freely and publicly available benchmarks. In this work, we introduce the FLORES evaluation datasets for Nepali–English and Sinhala–English, based on sentences translated from Wikipedia. Compared to English, these are languages with very different morphology and syntax, for which little out-of-domain parallel data is available and for which relatively large amounts of monolingual data are freely available. We describe our process to collect and cross-check the quality of translations, and we report baseline performance using several learning settings: fully supervised, weakly supervised, semi-supervised, and fully unsupervised. Our experiments demonstrate that current state-of-the-art methods perform rather poorly on this benchmark, posing a challenge to the research community working on low-resource MT. Data and code to reproduce our experiments are available at https://github.com/facebookresearch/flores. Introduction Research in Machine Translation (MT) has seen significant advances in recent years thanks to improvements in modeling, and in particular neural models (Sutskever et al., 2014;Bahdanau et al., 2015;Gehring et al., 2016;Vaswani et al., 2017), as well as the availability of large parallel corpora for training (Tiedemann, 2012;Smith et al., 2013;Bojar et al., 2017). Indeed, modern neural MT systems can achieve near human-level translation performance on language pairs for which sufficient parallel training resources exist (e.g., Chinese-English translation (Hassan et al., 2018) and English-French translation (Gehring et al., 2016;Ott et al., 2018a)). Unfortunately, MT systems, and in particular neural models, perform poorly on low-resource language pairs, for which parallel training data is scarce (Koehn and Knowles, 2017).
Improving translation performance on low-resource language pairs could be very impactful considering that these languages are spoken by a large fraction of the world population. Technically, there are several challenges to solve in order to improve translation for low-resource languages. First, in the face of the scarcity of clean parallel data, MT systems should be able to use any source of data available, namely monolingual resources, noisy comparable data, as well as parallel data in related languages. Second, we need reliable public evaluation benchmarks to track progress in translation quality. Building evaluation sets on low-resource languages is both expensive and time-consuming because the pool of professional translators is limited, as there are few fluent bilingual speakers for these languages. Moreover, the quality of professional translations for low-resource languages is not on par with that of high-resource languages, given that the quality assurance processes for the low-resource languages are often lacking or under development. Also, it is difficult to verify the quality of the human translations as a non-native speaker, because the topics of the documents in these low-resource languages may require knowledge and context coming from the local culture. In this work, we introduce new evaluation benchmarks on two very low-resource language pairs: Nepali-English and Sinhala-English. Sentences were extracted from Wikipedia articles in each language and translated by professional translators. The datasets we release to the community are composed of a tune set of 2559 and 2898 sentences, a development set of 2835 and 2766 sentences, and a test set of 2924 and 2905 sentences for Nepali-English and Sinhala-English, respectively. In §3, we describe the methodology we used to collect the data as well as to check the quality of translations. The experiments reported in §4 demonstrate that these benchmarks are very challenging for current state-of-the-art methods, yielding very low BLEU scores (Papineni et al., 2002) even using all available parallel data as well as monolingual data or Paracrawl 1 filtered data. This suggests that these languages and evaluation benchmarks can constitute a useful test-bed for developing and comparing MT systems for low-resource language pairs. Related Work There is ample literature on low-resource MT. From the modeling side, one possibility is to design methods that make more effective use of monolingual data. This is a research avenue that has seen a recent surge of interest, starting with semi-supervised methods relying on back-translation (Sennrich et al., 2015), integration of a language model into the decoder (Gulcehre et al., 2017;Stahlberg et al., 2018), all the way to fully unsupervised approaches (Lample et al., 2018b), which use monolingual data both for learning good language models and for fantasizing parallel data. Another avenue of research has been to extend the traditional supervised learning setting to a weakly supervised one, whereby the original training set is augmented with parallel sentences mined from noisy comparable corpora like Paracrawl. In addition to the challenge of learning with limited supervision, low-resource language pairs often involve distant languages that do not share the same alphabet, or have very different morphology and syntax; accordingly, recent work has begun to explore language-independent lexical representations to improve transfer learning (Gu et al., 2018).
In terms of low-resource datasets, DARPA programs like LORELEI (Strassel and Tracey, 2016) have collected translations on several low-resource languages like English-Tagalog. Unfortunately, the data is only made available to the program's participants. More recently, the Asian Language Treebank project (Riza et al., 2016) has introduced parallel datasets for several low-resource language pairs, but these are sampled from text originating in English and thus may not generalize to text sampled from low-resource languages. In the past, there has been work on extracting high quality translations from crowd-sourced workers using automatic methods (Zaidan and Callison-Burch, 2011;Post et al., 2012). However, crowd-sourced translations have generally lower quality than professional translations. In contrast, in this work we explore the quality checks that are required to filter professional translations of low-resource languages in order to build a high quality benchmark set. In practice, there are very few publicly available datasets for low-resource language pairs, and oftentimes researchers simulate learning on low-resource languages by using a high-resource language pair like English-French, and merely limiting how much labeled data they use for training (Johnson et al., 2016;Lample et al., 2018a). While this practice enables a framework for easy comparison of different approaches, the real practical implications deriving from these methods can be unclear. For instance, low-resource languages are often distant, and oftentimes the corresponding corpora are not comparable, conditions which are far from the simulation with high-resource European languages, as has been recently pointed out by Neubig and Hu (2018). Methodology & Resulting Datasets For the construction of our benchmark sets we chose to translate Nepali and Sinhala into and out of English. Both Nepali and Sinhala are Indo-Aryan languages with a subject-object-verb (SOV) structure. Nepali is similar to Hindi in its structure, while Sinhala is characterized by extensive omissions of arguments in a sentence. Nepali is spoken by about 20 million people if we consider only Nepal, while Sinhala is spoken by about 17 million people just in Sri Lanka 2. Sinhala and Nepali have very little publicly available parallel data. For instance, most of the parallel corpora for Nepali-English originate from GNOME and Ubuntu handbooks, and account for about 500K sentence pairs. 3 For Sinhala-English, there are an additional 600K sentence pairs automatically aligned from OpenSubtitles (Lison et al., 2018). Overall, the domains and quantity of the existing parallel data are very limited. However, both languages have a rather large amount of monolingual data publicly available (Buck et al., 2014), making them perfect candidates to track performance on unsupervised and semi-supervised tasks for Machine Translation. Document selection To build the evaluation sets, we selected and professionally translated sentences originating from Wikipedia articles in English, Nepali and Sinhala from a Wikipedia snapshot of early May 2018. To select sentences for translation, we first selected the top 25 documents that contain the largest number of candidate sentences in each source language. To this end, we defined candidate sentences 4 as: (i) being in the intended source language according to a language-id classifier (Bojanowski et al., 2017) 5, and (ii) having a length between 50 and 150 characters.
Moreover, we considered sentences and documents to be inadequate for translation when they contained large portions of untranslatable content such as lists of entities 6. To avoid such lists, we used the following rules: (i) for English, sentences have to start with an uppercase letter and end with a period; (ii) for Nepali and Sinhala, sentences should not contain symbols such as bullet points, repeated dashes, repeated periods or ASCII characters (these selection heuristics are sketched in code further below). The document set, along with the categories of documents, is presented in the Appendix, Table 8. 2 See https://www.ethnologue.com/language/npi and https://www.ethnologue.com/language/sin. 3 Nepali also has 4K sentences translated from the English Penn Treebank at http://www.cle.org.pk/software/ling_resources/UrduNepaliEnglishParallelCorpus.htm, which is valuable parallel data. 4 We first used HTML markup to split document text into paragraphs. We then used regular expressions to split on punctuation, e.g. full-stop, poorna virama (\u0964) and exclamation marks. 5 This is a necessary step as many sentences in foreign language Wikipedias may be in English or other languages. 6 For example, the Academy Awards page: https://en.wikipedia.org/wiki/Academy_Award_for_Best_Supporting_Actor. After the document selection process, we randomly sampled 2,500 sentences for each language. From English, we translated into Nepali and Sinhala, while from Sinhala and Nepali, we only translated into English. We requested each string to be translated twice by different translators. Quality checks Translating domain-specialized content such as Wikipedia articles from and to low-resource languages is challenging: the pool of available translators is limited, there is limited context available to each translator when translating one string at a time, and some of the sentences can contain code-switching (e.g. text about Buddhism in Nepali or Sinhala can contain Sanskrit or Pali words). As a result, we observed large variations in the level of translation quality, which motivated us to enact a series of automatic and manual checks to filter out poor translations. We first used automatic methods to filter out poor translations and sent them for rework. Once the reworked translations were received, we sent all translations (original or reworked) that passed the automatic checks to human quality checks. Translations that failed human checks were disregarded. Only the translations that passed all checks were added to the evaluation benchmark, although some source sentences may have fewer than two translations. Below, we describe the automatic and manual quality checks that we applied to the datasets. Automatic Filtering. The guiding principles underlying our choice of automatic filters are: (i) translations should be fluent (Zaidan and Callison-Burch, 2011), (ii) they should be sufficiently different from the source text, (iii) translations should be similar to each other, yet not equal; and (iv) translations should not be transliterations.
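Before turning to the concrete translation-level filters, the candidate-sentence selection heuristics referenced above (language identification plus length and formatting constraints) can be approximated in code. This is a hedged sketch only: the fastText model file, the thresholds, and the regular expression are placeholder choices of ours and do not reproduce the authors' exact pipeline.

import re
import fasttext  # pip install fasttext; lid.176.bin is the public language-id model

lid = fasttext.load_model("lid.176.bin")

def is_candidate(sentence, lang):
    # lang is a fastText label such as "en", "ne" or "si".
    if not 50 <= len(sentence) <= 150:
        return False
    labels, _ = lid.predict(sentence.replace("\n", " "))
    if labels[0] != f"__label__{lang}":
        return False
    if lang == "en":
        # English sentences must start with an uppercase letter and end with a period.
        return sentence[0].isupper() and sentence.endswith(".")
    # Nepali / Sinhala: reject list-like content (bullets, repeated dashes or periods,
    # runs of ASCII letters as a rough proxy for untranslatable Latin-script content).
    if re.search(r"[\u2022\u25cf]|--+|\.\.+|[A-Za-z]{4,}", sentence):
        return False
    return True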
In order to identify the vast majority of translation issues, we filtered by: (i) applying a count-based n-gram language model trained on Wikipedia monolingual data and removing translations that have perplexity above 3000.0 (English translations only), (ii) removing translations that have sentence-level char-BLEU score between the two generated translations below 15 (indicating disparate translations) or above 90 (indicating suspiciously similar translations), (iii) removing sentences that contain at least 33% transliterated words, (iv) removing translations where at least 50% of words are copied from the source sentence, and (v) removing translations that contain more than 50% out-of-vocabulary words or more than 5 total out-of-vocabulary words in the sentences (English translations only). For this, the vocabulary was calculated on the monolingual English Wikipedia described in Table 2. Manual Filtering. We followed a setup similar to direct assessment (Graham et al., 2013). We asked three different raters to rate sentences from 0-100 according to the perceived translation quality. In our guidelines, the 0-10 range represents a translation that is completely incorrect and inaccurate, the 70-90 range represents a translation that closely preserves the semantics of the source sentence, while the 90-100 range represents a perfect translation. To ensure rating consistency, we rejected any evaluation set in which the range of scores among the three reviewers was above 30 points, and requested a fourth rater to break ties, by replacing the most diverging translation rating with the new one. For each translation, we took the average score over all raters and rejected translations whose scores were below 70. To ensure that the translations were as fluent as possible, we also designed an Amazon Mechanical Turk (AMT) monolingual task to judge the fluency of English translations. Regardless of content preservation, translations that are not fluent in the target language should be disregarded. For this task, we then asked five independent human annotators to rate the fluency of each English translation from 1 (bad) to 5 (excellent), and retained only those above 3. Additional statistics of the automatic and manual filtering stages can be found in the Appendix. Resulting Datasets We built three evaluation sets for each language pair using the data that passed our automatic and manual quality checks: dev (tune), devtest (validation) and test (test). The tune set is used for hyperparameter tuning and model selection, the validation set is used to measure generalization during development, while the test set is used for the final blind evaluation. To measure performance in both directions (e.g. Sinhala-English and English-Sinhala), we built test sets with mixed original-translationese (Baroni and Bernardini, 2005) on the source side. To reduce the effect of the source language on the quality of the resulting evaluation benchmark, direct and reverse translations were mixed at an approximate 50-50 ratio for the devtest and test sets. On the other hand, the dev set was composed of the remainder of the available translations, which were not guaranteed to be balanced. Before selection, the sentences were grouped by document, to minimize the number of documents per evaluation set. In Table 1 we present the statistics of the resulting sets. For Sinhala-English, the test set is composed of 850 sentences originally in English, and 850 originally in Sinhala. We have approximately 1.7 translations per sentence.
This yielded 1,465 sentence pairs originally in English, and 1,440 originally in Sinhalese, for a total of 2,905 sentences. Similarly, for Nepali-English, the test set is composed of 850 sentences originally in English, and 850 originally in Nepali. This yielded 1,462 sentence pairs originally in English and 1,462 originally in Nepali, for a total of 2,924 sentence pairs. The composition of the rest of the sets can be found in Table 1. In Appendix Table 6, we present the aggregate distribution of topics per sentence for the datasets in Nepali-English and Sinhala-English, which shows a diverse representation of topics ranging from General (e.g. documents about tires, shoes and insurance), History (e.g. documents about history of the radar, the Titanic, etc.) to Law and Sports. This richness of topics increases the difficulty of the set, as it requires models that are rather domain-independent. The full list of documents and topics is also in Appendix, Table 8. Experiments In this section, we first describe the data used for training the models, we then discuss the learning settings and models considered, and finally we report the results of these baseline models on the new evaluation benchmarks. Training Data Small amounts of parallel data are available for Sinhala-English and Nepali-English. Statistics can be found in Table 2. This data comes from different sources. Open Subtitles and GNOME/KDE/Ubuntu come from the OPUS repository 7 . Global Voices is an updated version (2018q4) of a data set originally created for the CASMACAT project 8 . Bible translations come from the bible-corpus 9 . The Paracrawl corpus comes from the Paracrawl project 10 . The filtered version (Clean Paracrawl) was generated using the LASER model (Artetxe and Schwenk, 2018) to get the best sentence pairs having 1 million English tokens as specified in Chaudhary et al. (2019). We also contrast this filtered version with a randomly filtered version (Random Paracrawl) with the same number of English tokens. Finally, our multilingual experiments in Nepali use Hindi monolingual (about 5 million sentences) and English-Hindi parallel data (about 1.5 million parallel sentences) from the IIT Bombay corpus 11 . Training Settings We evaluate models in four training settings. First, we consider a fully supervised training setting using the parallel data listed in Table 2. Second, we consider a fully unsupervised setting, whereby only monolingual data on both the source and target side are used to train the model (Lample et al., 2018b). Third, we consider a semi-supervised setting where we also leverage monolingual data on the target side using the standard back-translation training protocol (Sennrich et al., 2015): we train a backward MT system, which we use to translate monolingual target sentences to the source language. Then, we merge the resulting pairs of noisy (back-translated) source sentences with the original target sentences and add them as additional parallel data for training source-to-target MT system. Since monolingual data is available for both languages, we train backward MT systems in both directions and repeat the back-translation process iteratively (He et al., 2016;Lample et al., 2018a). We consider up to two back-translation iterations. At each iteration we generate back-translations using beam search, which has been shown to perform well in low-resource settings (Edunov et al., 2018); we use a beam width of 5 and individually tune the length-penalty on the dev set. 
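The iterative back-translation protocol just described can be summarized at a pseudocode level. In the sketch below, train and translate are caller-supplied functions standing in for full fairseq training and beam-search generation runs; they are not real library calls, and the data handling is deliberately simplified.

def iterative_back_translation(parallel, mono_src, mono_tgt, train, translate,
                               iterations=2, beam=5):
    # parallel: list of (src, tgt) pairs; mono_src / mono_tgt: monolingual sentences.
    reversed_parallel = [(t, s) for s, t in parallel]
    fwd = train(parallel)            # source -> target model
    bwd = train(reversed_parallel)   # target -> source model
    for _ in range(iterations):
        # Back-translate monolingual target data with beam search (beam width 5)
        # and add the resulting noisy pairs to the forward training data.
        synthetic_src = translate(bwd, mono_tgt, beam=beam)
        fwd = train(parallel + list(zip(synthetic_src, mono_tgt)))
        # Monolingual data is available on both sides, so the backward model is
        # refreshed in the same way before the next iteration.
        synthetic_tgt = translate(fwd, mono_src, beam=beam)
        bwd = train(reversed_parallel + list(zip(synthetic_tgt, mono_src)))
    return fwd, bwd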
Finally, we consider a weakly supervised setting by using a baseline system to filter out Paracrawl data using LASER (Artetxe and Schwenk, 2018) by following the approach similar to Chaudhary et al. (2019), in order to augment the original training set with a possibly larger but noisier set of parallel sentences. For Nepali only, we also consider training using Hindi data, both in a joint supervised and semi-supervised setting. For instance, at each iteration of the joint semi-supervised setting, we use models from the previous iteration to backtranslate English monolingual data into both Hindi and Nepali, and from Hindi and Nepali monolingual data into English. We then concatenate actual parallel data and back-translated data of the same language pair together, and train a new model. We also consider using English-Hindi data in the unsupervised scenario. In that setting, a model is pretrained in an unsupervised way with English, Hindi and Nepali monolingual data using the unsupervised approach by Lample and Conneau (2019), and it is then jointly trained on both the Nepali-English unsupervised learning task and the Hindi-English supervised task (in both directions). Models & Architectures We consider both phrase-based statistical machine translation (PBSMT) and neural machine translation (NMT) systems in our experiments. All hyper-parameters have been cross-validated using the dev set. The PBSMT systems use Moses (Koehn et al., 2007), with state-of-theart settings (5-gram language model, hierarchical lexicalized reordering model, operation sequence model) but no additional monolingual data to train the language model. The NMT systems use the Transformer (Vaswani et al., 2017) implementation in the Fairseq toolkit (Ott et al., 2019); preliminary experiments showed these to perform better than LSTM-based NMT models. More specifically, in the supervised setting, we use a Transformer architecture with 5 encoder and 5 decoder layers, where the number of attention heads, embedding dimension and inner-layer dimension are 2, 512 and 2048, respectively. In the semi-supervised setting, where we augment our small parallel training data with millions of back-translated sentence pairs, we use a larger Transformer architecture with 6 encoder and 6 decoder layers, where the number of attention heads, embedding dimension and inner-layer dimension are 8, 512 and 4096, respectively. When we use multilingual data, the encoder is shared in the {Hindi, Nepali}-English direction, and the decoder is shared in the English-{Hindi, Nepali}direction. We regularize our models with dropout, label smoothing and weight decay, with the corresponding hyper-parameters tuned independently for each language pair. Models are optimized with Adam (Kingma and Ba, 2015) using β 1 = 0.9, β 2 = 0.98, and = 1e − 8. We use the same learning rate schedule as . We run experiments on between 4 and 8 Nvidia V100 GPUs with mini-batches of between 10K and 100K target tokens following . Code to reproduce our results can be found at https://github.com/facebookresearch/flores. Preprocessing and Evaluation We tokenize Nepali and Sinhala using the Indic NLP Library. 12 For the PBSMT system, we tokenize English sentences using the Moses tokenization scripts. For NMT systems, we instead use a vocabulary of 5K symbols based on a joint source and target Byte-Pair Encoding (BPE; Sennrich et al., 2015) learned using the sentencepiece library 13 over the parallel training data. 
We learn the joint BPE for each language pair over the raw English sentences and tokenized Nepali or Sinhala sentences. We then remove training sentence pairs with more than 250 source or target BPE tokens. We report detokenized SacreBLEU (Post, 2018) when translating into English, and tokenized BLEU (Papineni et al., 2002) when translating from English into Nepali or Sinhala. Results In the supervised setting, PBSMT performed considerably worse than NMT, achieving BLEU scores of 2.5, 4.4, 1.6 and 5.0 on English-Nepali, Nepali-English, English-Sinhala and Sinhala-English, respectively. Table 3 reports results using NMT in all the other learning configurations described in §4.2. There are several observations we can make. First, these language pairs are very difficult, as even supervised NMT baselines achieve BLEU scores of less than 8. Second, and not surprisingly, the BLEU score is particularly low when translating into the more morphologically rich Nepali and Sinhala languages. Third, unsupervised NMT approaches seem to be ineffective on these distant language pairs, achieving BLEU scores close to 0. The reason for this failure is poor initialization of the word embeddings. Table 3: BLEU scores of NMT using various learning settings on devtest (see §3). We report detokenized SacreBLEU (Post, 2018) for {Ne,Si}→En and tokenized BLEU for En→{Ne,Si}. Poor initialization can be attributed to the monolingual corpora used to train word embeddings, which do not have a sufficient number of overlapping strings and are not comparable (Neubig and Hu, 2018; Søgaard et al., 2018). Fourth, the biggest improvements are brought by the semi-supervised approach using back-translation, which nearly doubles BLEU for Nepali-English from 7.6 to 15.1 (+7.5 BLEU points) and Sinhala-English from 7.2 to 15.1 (+7.9 BLEU points), and yields gains of +2.5 BLEU points for English-Nepali and +5.3 BLEU points for English-Sinhala. Fifth, additional parallel data in English-Hindi further improves translation quality in Nepali across all settings. For instance, in the Nepali-English supervised setting, we observe a gain of 6.5 BLEU points, while in the semi-supervised setting (where we also back-translate to and from Hindi) the gain is 6.4 BLEU points. Similarly, in the unsupervised setting, multilingual training with Hindi brings Nepali-English to 3.9 BLEU and English-Nepali to 2.5 BLEU; if, however, the architecture is pretrained as prescribed by Lample and Conneau (2019), the BLEU score improves to 18.8 for Nepali-English and 8.3 for English-Nepali. Finally, the weakly supervised baseline using the additional noisy parallel data described in §4.1 improves upon the supervised baseline in all four directions. This is studied in more depth in Table 4 for Sinhala-English and Nepali-English. Without any filtering or with random filtering, the BLEU score is close to 0. Applying a filtering method based on LASER scores (Artetxe and Schwenk, 2018) provides an improvement of +5.5 BLEU points for Nepali-English and +7.3 BLEU points for Sinhala-English over using the unfiltered Paracrawl. Adding Paracrawl Clean to the initial parallel data improves performance by +2.0 and +3.7 BLEU points for Nepali-English and Sinhala-English, respectively. Table 4: Weakly supervised experiments: adding noisy parallel data from filtered Paracrawl improves translation quality in some conditions. "Parallel" refers to the data described in Table 2.
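The Paracrawl filtering summarized in Table 4 can be sketched roughly as follows. This is a simplified illustration: it scores candidate pairs by plain cosine similarity between LASER sentence embeddings and keeps the best pairs up to a 1M-English-token budget, whereas the actual selection follows the margin-based scoring of Chaudhary et al. (2019).

```python
import numpy as np

def filter_paracrawl(pairs, src_emb, en_emb, budget_en_tokens=1_000_000):
    """Keep the highest-scoring (source, English) pairs until ~1M English tokens.

    `src_emb` / `en_emb` are LASER sentence embeddings, row-aligned with `pairs`.
    """
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

    scores = [cos(s, e) for s, e in zip(src_emb, en_emb)]
    order = np.argsort(scores)[::-1]          # best-scoring pairs first

    kept, n_en = [], 0
    for i in order:
        src, en = pairs[i]
        kept.append((src, en))
        n_en += len(en.split())
        if n_en >= budget_en_tokens:
            break
    return kept
```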
Discussion In this section, we provide an analysis of the performance on the Nepali to English devtest set using the semi-supervised machine translation system, see Figure 1. Findings on other language directions are similar. Fluency of references: we observe no correlation between the fluency rating of human references and the quality of translations as measured by BLEU. This suggests that the difficulty of the translation task is not related to the fluency of the references, at least at the current level of accuracy. Document difficulty: we observe that translation quality is similar across all document ids, with a difference of 10 BLEU points between the document that is the easiest and the hardest to translate. This suggests that the random sampling procedure used to construct the dataset was adequate and that no single Wikipedia document produces much harder sentences than others. Original vs translationese: we noticed that documents originating from Nepali are harder to translate than documents originating in English. This holds when performing the evaluation with the supervised MT system: translations of original Nepali sentences obtain 4.9 BLEU while Nepali translationese obtain 9.1 BLEU. This suggests that the existing parallel corpus is closer to English Wikipedia than Nepali Wikipedia. Figure 1: Analysis of the Ne→En devtest set using the semi-supervised machine translation system. Left: sentence level BLEU versus AMT fluency score of the reference sentences in English; source sentences that have received more fluent human translations are not easier to translate by machines. Right: average sentence level BLEU against Wikipedia document id from which the source sentence was extracted; sentences have roughly the same degree of difficulty across documents since there is no extreme difference between shortest and tallest bar. However, source sentences originating from Nepali Wikipedia (blue) are translated more poorly than those originating from English Wikipedia (red). Documents are sorted by BLEU for ease of reading. Domain drift To better understand the effect of domain mismatch between the parallel dataset and the Wikipedia evaluation set, we restricted the Sinhala-English training set to only the Open Subtitles portion of the parallel dataset, and we held out 1000 sentences for "in-domain" evaluation of generalization performance. Table 5 shows that translation quality on in-domain data is between 10 and 16 BLEU points higher. This may be due to both domain mismatch as well as sensitivity of the BLEU metric to sentence length. Indeed, there are on average 6 words per sentences in the Open Subtitles test set compared to 16 words per sentence in the FLORES devtest set. However, when we train semi-supervised models on back-translated Wikipedia data whose domain better matches the "Out-of-domain" devtest set, we see much larger gains in BLEU for the "Out-of-domain" set than we see on the "In-domain" set, suggesting that domain mismatch is indeed a major problem. Conclusions One of the biggest challenges in MT today is learning to translate low-resource language pairs. Research in this area not only faces formidable technical challenges, from learning with limited supervision to dealing with very distant languages, but it is also hindered by the lack of freely and publicly available evaluation benchmarks. In this work, we introduce and freely release to the community FLORES benchmarks for Nepali-English and Sinhala-English . 
Nepali and Sinhala are languages with very different syntax and morphology than English; also, very little parallel data in these language pairs is publicly available. However, a good amount of monolingual data, parallel data in related languages, and Paracrawl data exist in both languages, making these two language pairs a perfect candidate for research on low-resource MT. Our experiments show that current state-of-theart approaches perform rather poorly on these new evaluation benchmarks, with semi-supervised and in particular multi-lingual neural methods outperforming all the other model variants and training settings we considered. We perform additional analysis to probe the quality of the datasets. We find no evidence of poor construction quality, yet observe that the low BLEU scores are partly due to the domain mismatch between the training and test datasets. We believe that these benchmarks will help the research community on low-resource MT make faster progress by enabling free access to evaluation data on actual low-resource languages and promoting fair comparison of methods. B Statistics of automatic filtering and manual filtering Figure 2: Histogram of averaged translation quality score. We ask three different raters to rate each sentence from 0-100 according to the perceived translation quality. In our guidelines, the 0-10 range represents a translation that is completely incorrect and inaccurate; the 11-29 range represents a translation with few correct keywords, but the overall meaning is different from the source; the 30-50 range represents a translation that contains translated fragments of the source string, with major mistakes; the 51-69 range represents a translation which is understandable and conveys the overall meaning of source string but contains typos or grammatical errors; the 70-90 range represents a translation that closely preserves the semantics of the source sentence; and the 90-100 range represents a perfect translation. Translations with averaged translation score less than 70 (red line) are removed from the dataset. Figure 3: Histogram of averaged AMT fluency score of English translations. We ask five different raters to rate each sentence from 1-5 according to its fluency. In our guidelines, the 1-2 range represents a sentence that is not fluent, 3 is neutral, while the 4-5 range is for fluent sentences that raters can easily understand. Translations with averaged fluency score less than 3 (red line) are removed from the dataset. Table 7: Percentage of translations that did not pass the automatic and manual filtering checks. We first use automatic methods to filter out poor translations and send those translations back for rework. We then collect translations that pass the automatic filtering and send them to two human quality checks, one for adequacy and the other for fluency. Note that the percentage of sentences that did not pass manual filtering is among those sentences that passed the automatic filtering. In the past, the assembly that advised the king were called 'parliament'. B In old times the counsil that gave advice to the king was called 'parliament'. System In old times the council of counsel to the king was 'Senate'. References A As a worker African Mandela joined the Congress party. B He joined the African National Congress as a activist. System As a worker, he joined the African National Congress. Source Iphone users can and do access the internet frequently, and in a variety of places. References A . 
B
Source: In Serious meets, the absolute score is somewhat meaningless.
References: A Threatening, physical violence, property damage, assault and execution are these punishments. B Threats, bodily violence, property damages, assaults and killing are these punishments.
System: Threats, physical harassment, property damage, strike and killing this punishment.
References: A After education priests leave ordination in order to fulfill duties to the family or due to sickness. B Sangha is often abandoned because of education or after fulfilling family responsibilities or because of illness.
System: After education or to fulfill the family's disease or disease conditions, the companion is often removed from substance.
Table 9: Examples of sentences from the En-Ne, Ne-En, En-Si and Si-En devtest set. System hypotheses (System) are generated using the semi-supervised model described in the main paper with beam-search decoding. (The non-English source sentences of these examples did not survive text extraction.)
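The manual filtering thresholds described in Appendix B above (adequacy rated 0-100 by three raters, fluency rated 1-5 by five raters) amount to a simple acceptance rule, sketched below with hypothetical rater scores.

```python
def passes_manual_filtering(adequacy_scores, fluency_scores):
    """Keep a translation only if its average adequacy (0-100, three raters)
    is at least 70 and its average fluency (1-5, five raters) is at least 3."""
    adequacy = sum(adequacy_scores) / len(adequacy_scores)
    fluency = sum(fluency_scores) / len(fluency_scores)
    return adequacy >= 70 and fluency >= 3

# Example: average adequacy 72.3 and average fluency 3.8, so the sentence is kept.
print(passes_manual_filtering([72, 80, 65], [4, 3, 3, 4, 5]))   # True
```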
7,392
2019-02-04T00:00:00.000
[ "Linguistics", "Computer Science" ]
Effect of Temperature and Pressure of Supercritical CO2 on Dewatering, Shrinkage and Stresses of Eucalyptus Wood Supercritical CO2 (SuCO2) dewatering can mitigate capillary tension and reduce wood collapse. In this study, Eucalyptus urophylla × E. grandis specimens were dewatered by SuCO2 at temperatures of 35, 40 and 55 °C, in pressures of 10 and 30 MPa, respectively, for 1h. Effects of temperature and pressure on dewatering rate, moisture content (MC) distribution and gradient, shrinkage and residual stress of wood after dewatering were investigated. The results indicate that the SuCO2 dewatering rate is much faster than that of conventional kiln drying (CKD). The dewatering rate increases with increasing of temperature and pressure; however, pressure has a significant influence, especially for the high-temperature dewatering process; the MC distribution after 1h dewatering is uneven and MC gradients decrease with reducing of mean final MC of wood. MC gradients along radial direction are much smaller than that in tangential direction; collapse of wood significantly reduces after dewatering due to SuCO2 decreasing the capillary tension, and residual stress of wood during dewatering is mainly caused by pressure of SuCO2, which decreases with increasing temperature. SuCO2 dewatering has great potential advantages in water-removal of wood prone to collapse or deformation. Introduction Eucalyptus species are planted in large areas in China due to their short growth cycle and strong adaptability; they have become the most important plantation wood species, producing a wide range of renewable materials. Eucalyptus wood is mainly used as raw materials in pulp, paper and wood-based panels, such as plywood, fiber boards and particle boards [1,2]. Owing to having relatively good mechanical performance, recently there has been significant interest in increasing the amount of eucalyptus wood as a resource of higher value-added solid wood products [3][4][5]. However, eucalyptus woods are predominantly available from short rotation cycles, which are mainly composed of juvenile wood and small-diameter logs [6]. Thus, timbers and lumbers of eucalyptus species are inherently difficult to process due to their higher variability, high growing tensions and poor permeability. Particularly, some severe problems arise from the convective drying process, such as intense collapse, internal checks and high internal drying stresses, which are responsible for reducing the yield of timber manufacturing [3,7,8]. Wood physical and mechanical characteristics are intensively related to water [9][10][11][12][13]. Capillary tensions occurring in convective drying are responsible for collapse and internal checks of timbers from eucalyptus wood. Woods collapses when cell walls cannot resist the capillary tensions caused by free water rapid migrating from cell lumens [14][15][16][17][18]. Thus, solutions have been developed and tested to reduce or prevent severe collapse of eucalyptus wood [19][20][21]. One the one hand, drying wood at a low temperature as slowly as possible may relax internal tensions or promotes collapse recovery during convective drying. On the other hand, collapse and internal checks may be mitigated using special approaches, such as freeze-drying [22] or supercritical CO 2 (SuCO 2 ) dewatering [23], which may remove or eliminate the capillary tension during free water migration. 
As is known, SuCO 2 fluid is an excellent transfer medium, which has wonderful properties, such as excellent solubility and heat transfer, non-toxicity, non-flammability, high recovery rate and strong process selectivity medium. As a green wood processing medium, SuCO 2 has been used successfully in wood industries, such as wood preservation, dying, extraction and thermochemical conversion due to its advantages of efficiency and environmental friendliness [24][25][26]. Conventional kiln drying (CKD) are widely used in the world, but resulting in excessive greenhouse gas and primary organic aerosols emissions for burning fossil fuel to obtain heat and steam [27], recently, dewatering wood using clean SuCO 2 fluid has been reported [28]. The water-removing mechanism of SuCO 2 fluid is attributed to the pressure difference between the CO 2 of the supercritical phase and the gas phase. Dewatering wood using SuCO 2 fluid differs from water evaporation in CKD of wood and may eliminate negative water tension maximally. Few collapses and cracks in wood were found after dewatering using SuCO 2 [29][30][31]. Thus, this dewatering method has great potential in such timbers as eucalyptus and poplar prone to collapse. There were some studies [32][33][34] investigating wood dewatering using SuCO 2 , but few investigations were related to the refractory eucalyptus wood and the effect of SuCO 2 on shrinkage and drying stress [35,36]. In the present paper, Eucalyptus urophylla × E. grandis wood was dewatered using SuCO 2 , at 35, 40 and 55 • C, in 10 and 30 MPa, respectively, for 1 h. The focus is, in particular, the systematical investigation into the interrelationship among temperature and pressure of SuCO 2 and dewatering rate, moisture distribution, drying stress and shrinkage during dewatering, will provide theoretical and practical support for improving the yield of eucalyptus timber manufacturing. Materials Green wood of Eucalyptus urophylla × E. grandis was supplied from Guangxi Provence, China. The trees were turned into logs and sealed with plastic films, and then were delivered to the wood Lab of Nanjing Forestry University. Thereafter, the logs were processed into boards with dimensions of 25 (R) × 30 (T) × 1000 (L) mm; after that, the boards were produced into end-matched specimens of 25 (R) × 30 (T) × 100 (L) mm for the subsequent SuCO 2 dewatering tests. The specimens were free of knots, and the initial moisture contents (MC) was about 110%. Dewatering Test Each test had three end-matched specimens, whose mass and dimensions were measured prior to test. The specimens were inserted into the extraction vessel, and the dewatering tests were conducted according to the schedule in Table 1. For each run, the specimens were in full contact with SuCO 2 for 60 min after the temperature and pressure reached the setting values. Then, the pressure of CO 2 was decreased to atmospheric pressure (0.1 MPa) in 10 min due to escaping CO 2 gas; thereafter, the specimens were taken out from the drying vessel for a further cessation of CO 2 emission in room temperature. Finally, the specimens were used for the subsequent measurement of MC distribution, drying stress and shrinkage. Moisture Content and Distribution Measurement The initial MC prior to test and final MC after dewatering of wood were determined according to the China National Standard (GB/T 1931-2009). The MC samples were dried in an oven at (103 ± 2) • C until the absolute dry mass was obtained. MC was calculated according to Equation (1). 
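Equation (1) itself is not reproduced in the extracted text; based on the oven-dry method and the variable definitions given in the next subsection, it is presumably the standard gravimetric moisture-content formula:

```latex
W \;=\; \frac{m_1 - m_0}{m_0} \times 100\%
```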
After dewatering, three 100-mm specimens were taken out from the vessels, and two 5-mm slices were cut from each specimen for MC and its distribution determination ( Figure 2). Each slice was divided into 25 pieces via marking cross lines. Thereafter, the slice was cut into 25 wood blocks using a knife. The average MC and its distribution were determined using the exact MC of each block. where W is the MC, (%); m 1 is the initial mass, (g); and m 0 is the absolute dry mass, (g). Shrinkage Measurement In this study, the shrinkage of wood after dewatering was determined based on the area in transverse section of the specimen [37]. As shown in Figure 2, two 2-mm slices were sawed from each specimen and then were scanned into images (300 dpi; Bit depth: 24) by a scanner (CanoScan LiDe 700F). Adobe Photoshop (Adobe Systems Inc., San Jose, CA, USA) was used to measure the pixel of the scanned images. The area of each slice was determined using the pixel of the scanned image. The shrinkage of the slices was calculated using Equation (2). where β is shrinkage, (%); P 0 is the pixel of the image of the initial slice sawing from two ends of specimen prior to dewatering, (px); and P 1 the pixel of the image of the slice sawing from the specimen after dewatering, (px). Residual Stress Measurement The residual stresses after dewatering of wood were measured using the prong test (GB/T 6491-2012). As shown in Figure 2, one 10-mm thickness slice was sawed from the middle of specimen and was employed for residual stress test. The slice was cut into a sample with a prong shape as shown in Figure 3. The initial thickness S of the specimen and the length L of prong edge were measured using a caliper (0-200 mm/0.01 mm). Thereafter, the slices were dried in an oven at (103 ± 2) • C for 3 h and then placed in ventilation place at room temperature for 24 h. After conditioning, the final dimension of S 1 was measured again using a caliper. The stress value Y is calculated using Equation (3): where, Y is the residual stress value, (%); S is the initial thickness of the slices, (mm); S 1 the final thickness of the slices, (mm); and L is the prong length of slices, (mm). Figure 4 shows the dewatering rate of Eucalyptus urophylla × E. grandis specimens after 1 h SuCO 2 dewatering. Moreover, the initial and final MCs of the specimens are also presented in the figure. The dewatering rate was compared with the drying rate of 5.5% per hour in CKD at 50 • C temperature and 84% relative humidity (RH) in previous study [35]. The dewatering rate using SuCO 2 is 5.2 to 11.7 times that of CKD. Fast removal of water in wood is important for timbers, which can prevent attacks of insects, shorten drying time and reducing storage cost of materials. Thus, SuCO 2 dewatering is beneficial for wood industry in these cases. For 10 MPa pressure, temperatures of 35, 40 and 50 • C dewatering, free water in wood was dewatered by 28.5, 32.6 and 34.8% per hour, respectively, and dewatering rate increased slowly with temperature rising. Similar results were also observed in dewatering at 30 MPa, 35 and 40 • C; however, in case of 55 • C, dewatering rate increased significantly. For the same temperature, dewatering rates increased more at higher pressure conditions. The average dewatering rate at 30 MPa is about 1.6 times to that at 10 MPa. All these findings suggest that dewatering using SuCO 2 is much faster than CKD; pressure significantly affects dewatering rate, and temperature has minor effect on dewatering rate at lower pressure. 
This finding is in agreement with previous report [28]. However, the effect of temperature on dewatering rate became obvious at higher pressure of 30 MPa in this study. Dewatering rate relates to concentrations of dissolved CO 2 and wood permeability [38]. Higher pressure accelerates dissolving of CO 2 , increasing the concentrations of CO 2 of the free water present in wood cell cavities [39]. During the decompression process, more CO 2 gas bubbles are generated in the free water of wood cell cavities, which expel water quickly from wood. Additionally, wood permeability affects CO 2 penetrating into wood and removal of water from wood. Wood permeability was improved by higher pressure of SuCO 2 [40], which benefits from penetration of CO 2 into wood and water removal from wood, thus resulting in higher dewatering rate at higher pressure of SuCO 2 dewatering. The color gradients in Figure 5 show MC gradients of wood after dewatering. The color bands are the same or similar in radial direction compared with that in tangential direction, especially for the left part of 10 MPa pressure. This means that MC gradients along radial direction are much smaller than those in tangential direction. Moreover, MCs were higher in the central parts and lower in the surface parts of wood. MC gradients along tangential direction were greater, indicating water in wood was dewatered mainly in this direction. This result coincides with previous studies [32,36,41]. These phenomena could be explained by the bordered pits that control the water migration between connected cells. There are more bordered pits in radial cell walls of wood, resulting in further possible paths for water migration along tangential direction. During SuCO 2 dewatering, the dissolved CO 2 expands to bubbles of gas when releasing the pressure; free water is mainly expelled along tangential direction by bubbles of gas through the bordered pits connecting cell lumens towards the surfaces of wood. Figure 7 shows the effect of pressure and temperature on shrinkage of wood after 1 h SuCO 2 dewatering. Theoretically, wood shrinks as MC decreases to fiber-saturated point (FSP); however, shrinkages of wood were observed after SuCO 2 dewatering in each test when their MCs were over FSP ( Figure 6). Collapse, an abnormal shrinkage, occurs when MC of wood is higher than FSP, which generally results in abnormal deformation. Severe collapses cause enormous loss of timber and high costs for industries. Therefore, the shrinkages in Figure 7 that occurred in each SuCO 2 dewatering indicate collapse of wood. However, the collapses of wood are only between 0.8% and 0.25%, which are much smaller than that of 2.8% in CKD and 3.6% in oven drying [35]. These results suggest that SuCO 2 dewatering may mitigate wood collapse and is in agreement with previous study [28]. Thus, SuCO 2 dewatering is an effective technology for improving timber drying quality. The less severe collapse is mainly attributed to the mechanism of SuCO 2 dewatering, which decreases capillary tension due to the fact free water in wood is expelled by CO 2 bubbles during decompression [32]. Additionally, collapse increases significantly with temperature at higher pressure dewatering. Figure 8 is the residual stress of wood after 1 h SuCO 2 dewatering test. The drying stresses at higher pressure were much greater than those at lower pressure and decreased with increasing temperature. These findings suggest that pressure mainly affects residual stress of wood in SuCO 2 dewatering process. 
When the MC of wood falls below the FSP, uneven shrinkage driven by large moisture gradients causes drying stresses [42,43]. However, greater moisture content gradients have also been reported to produce greater stresses during drying under high-pressure steam conditions even when the MC of wood is above the FSP [44]. In this study, the larger MC gradients observed in 10 MPa dewatering (Figures 5 and 6) were accompanied by smaller residual stresses, indicating that residual stresses are caused mainly by the pressure of supercritical CO 2 when the MC of wood is above the FSP. Conclusions The results show that SuCO 2 dewatering is much faster than conventional kiln drying (CKD). Both temperature and pressure affect the dewatering rate during SuCO 2 dewatering, but pressure has the stronger influence, especially in the high-temperature dewatering process. The dewatering rate increases with increasing temperature and pressure, while the MC gradients of wood decline as the mean final MC decreases after 1 h of dewatering. MC gradients along the radial direction are much smaller than those along the tangential direction, and the MC distribution in the wood is uneven. Collapse of the wood is significantly reduced after dewatering because SuCO 2 lowers the capillary tension. Residual stresses generated during dewatering are caused mainly by the pressure of SuCO 2 and decrease with increasing temperature. Owing to the fast dewatering rate and reduced collapse, SuCO 2 dewatering has great potential for water removal from wood prone to collapse or deformation. Although both temperature and pressure affect the dewatering behavior, shrinkage and stresses of wood, pressure has the dominant effect; pressure and temperature should therefore be optimized jointly to obtain high-quality eucalyptus timber while keeping operating costs low.
3,557.4
2021-09-18T00:00:00.000
[ "Materials Science", "Environmental Science" ]
Commentary: Developmental Constraints on Learning Artificial Grammars with Fixed, Flexible, and Free Word Order A long-standing hypothesis in linguistics is that typological generalizations can shed light on the nature of the cognitive constraints underlying language processing and acquisition. In this perspective, Nowak and Baggio (2017) address the question of whether human learning mechanisms are constrained in ways that reflect typologically attested (possible) or unattested (impossible) linguistic patterns (Moro et al., 2001; Moro, 2016). Here, I show that the contrasts in Nowak and Baggio (2017) can be explained by language-theoretical characterizations of the stimuli, in line with a relatively recent research program focused on studying phonological generalizations from a mathematical perspective (Heinz, 2011a,b). The fundamental insight is that linguistic regularities that fall outside of certain complexity classes cannot be learned, due to computational properties reflecting implicit cognitive biases. A commentary on Developmental Constraints on Learning Artificial Grammars with Fixed, Flexible and Free Word Order by Nowak, I., and Baggio, G. (2017). Front. Psychol. 8:1816. doi: 10.3389/fpsyg.2017 DEVELOPMENTAL CONSTRAINTS ON LEARNING In order to test whether adults and children have different biases toward typologically plausible patterns, Nowak and Baggio (2017) construct four finite-state grammars imposing varying constraints on word order (fixed: FXO1 and FXO2; flexible: FLO; and free: FRO), instantiated over two word classes: shorter, more frequent words (F-words) and longer, less frequent ones (C-words). Participants were asked to differentiate between strings produced by the grammar they had been trained on and strings produced by a different grammar (e.g., FXO1 vs. FLO). Adults succeeded in recognizing fixed and flexible word-order strings (Experiment 1: FXO1 vs. FLO) and failed in recognizing free word-order strings (Experiment 2: FXO2 vs. FRO). In contrast, children could recognize flexible word-order and free word-order strings, but not fixed word-order strings (Experiments 3 and 4, replicating the contrasts of Experiments 1 and 2). The authors attribute these results to the inability of children to acquire typologically implausible grammars, suggesting that adults either have distinct constraints on language learning, or are able to employ more general learning strategies.
SUBREGULAR COMPLEXITY Nowak and Baggio (2017) control for information-theoretical differences (e.g., Shannon entropy; Shannon, 1948) among strings to explicitly refute computational explanations of their results. Crucially, a different computational measure-based on language-theoretical characterizations sensitive to structural properties of the grammars-is dismissed by assuming that the finite-state grammars generating the stimuli lead to languages of equivalent complexity (i.e., regular languages). This latter assumption is grounded in the Chomsky Hierarchy (Chomsky, 1956), which divides languages (string-sets) into nested regions of complexity (classes) based on the expressivity of the grammars generating them. However, while regular languages were originally treated as a monolithic unit, it has been shown that they can be decomposed into a finer-grained hierarchy of languages of decreasing complexity-the Subregular Hierarchy (McNaughton and Papert, 1971;Rogers et al., 2010). A case has been made for the relevance of this classification for cognition (Rogers and Pullum, 2011;Heinz and Idsardi, 2013;Rogers et al., 2013). Recently, it was posited that the complexity of human language patterns is bound by classes in this hierarchy (the Subregular Hypothesis; Heinz, 2010;McMullin, 2016;Graf, 2017), which have been shown to make valuable generalizations across different domains (Aksënova et al., 2016;Aksënova and De Santo, 2017). It also appears that the simpler classes in the hierarchy are more easily learnable by humans (Hwangbo, 2015;Lai, 2015;Avcu, 2017). Here, my focus is on Strictly k-Local (SL k ) languages, which define strings in terms of finite sets of allowed k-gramscontiguous sequences of symbols of length k. Consider CFCFC and CFCFCC, two well-formed strings for FLO. A strictly k-local grammar is constructed by listing the smallest set of k-grams needed to distinguish between well-formed and ill-formed strings (e.g., * FCFCFC, * CFCFF): Language complexity is measured not by the size of the grammar, but by the minimal length (k) of the substrings needed to generate all (and only) its well-formed strings. Thus, FLO is a Strictly 2-Local (SL 2 ) language. Similarly, FRO is SL 1 , FXO1 is SL 3 , and FXO2 is SL 4 (cf. Figure 1). Importantly, SL languages form a proper hierarchy in k: FRO is then the simplest language, while FXO2 is the most complex. We can now interpret the learnability differences shown for adults vs. children, in light of the subregular complexity of the target string-sets. The contrast between FXO1 and FLO (Experiment 1 and 3) shows that SL grammars are equivalently easy for adults independently of the dimension of the k-grams; while children seem unable to correctly generalize over grammars with complexity greater than SL 2 . Languagetheoretical considerations also allow for a deeper understanding of the contrast between FXO2 and FRO (Experiment 2 and 4). In Experiment 2, adults perform well when trained over FXO2: if adults can easily learn SL grammars of any size, this is not an unexpected result. What should come as a surprise is the low performance on FRO, the simplest SL 1 grammar. However, consider that by construction FRO allows for any possible combination of symbols from the alphabet. Therefore, the set of strings generated by FXO2 is a proper subset of the set generated by FRO. 
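Before turning to the adults' performance on FRO, the strictly local construction sketched above can be made concrete with a short illustration. The set of allowed 2-grams below is inferred only from the two well-formed FLO strings quoted above and the two starred counterexamples; it is an illustration, not the exact grammar used in the experiments.

```python
BOUNDARY_L, BOUNDARY_R = "<", ">"   # word-boundary markers (often written ⋊ and ⋉)

def k_factors(string: str, k: int):
    """All contiguous substrings of length k, with boundary markers added."""
    s = BOUNDARY_L + string + BOUNDARY_R
    return {s[i:i + k] for i in range(len(s) - k + 1)}

def sl_accepts(grammar: set, string: str, k: int) -> bool:
    """A Strictly k-Local grammar accepts a string iff every k-factor is allowed."""
    return k_factors(string, k) <= grammar

# Allowed 2-grams inferred from the well-formed FLO strings CFCFC and CFCFCC,
# i.e. {'<C', 'CF', 'FC', 'CC', 'C>'}.
FLO_SL2 = k_factors("CFCFC", 2) | k_factors("CFCFCC", 2)

print(sl_accepts(FLO_SL2, "CFCFC", 2))    # True
print(sl_accepts(FLO_SL2, "FCFCFC", 2))   # False: '<F' is not an allowed 2-gram
print(sl_accepts(FLO_SL2, "CFCFF", 2))    # False: 'FF' is not an allowed 2-gram
```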
Low performance of adults trained on FRO is then expected: since strings from FXO2 are also possible strings for FRO, participants will recognize every string as grammatical, and perform worse on the recognition task. Keeping in mind this possible confound, Experiment 4 (low accuracy when trained on FXO2 vs. FRO) suggests that children might be biased in favor of less restrictive and computationally simpler grammars. Nowak and Baggio (2017) present an interesting investigation of developmental biases in language learning mechanisms. I argue that a subregular characterization of their stimuli can help interpret learning differences between adults and children, thus suggesting that the nature of the observed biases is in fact intrinsically computational. From this perspective, unlearnable patterns would be those requiring computational resources that exceed what is allowed for a specific cognitive subdomain. What emerges is a strong parallel between language-theoretical approaches, and a research program focused on understanding possible/impossible patterns in human languages. Thus, as Jäger and Rogers (2012) suggest, closer collaborations between cognitive scientists and formal language theorists would improve the design and interpretation of artificial grammar experiments targeting human language biases. AUTHOR CONTRIBUTIONS AD reviewed the literature, developed the theoretical stance, and wrote the manuscript. ACKNOWLEDGMENTS The author would like to thank Alëna Aksënova, John E. Drury, Thomas Graf, and Jon Rawski for helpful remarks.
1,673
2018-03-06T00:00:00.000
[ "Linguistics" ]
Transient receptor potential channels’ genes forecast cervical cancer outcomes and illuminate its impact on tumor cells Introduction: In recent years, there has been a strong association between transient receptor potential (TRP) channels and the development of various malignancies, drug resistance, and resistance to radiotherapy. Consequently, we have investigated the relationship between transient receptor potential channels and cervical cancer from multiple angles. Methods: Patients’ mRNA expression profiles and gene variants were obtained from the TCGA database. Key genes in transient receptor potential channel prognosis-related genes (TRGs) were screened using the least absolute shrinkage and selection operator (LASSO) regression method, and a risk signature was constructed based on the expression of key genes. Various analyses were performed to evaluate the prognostic significance, biological functions, immune infiltration, and response to immunotherapy based on the risk signature. Results: Our research reveals substantial differences between high and low-risk groups in prognosis, tumor microenvironment, tumor mutational load, immune infiltration, and response to immunotherapy. Patients in the high-risk group exhibited poorer prognosis, lower tumor microenvironment scores and reduced response to immunotherapy while showing increased sensitivity to specific targeted drugs. In vitro experiments further illustrated that inhibiting transient receptor potential channels effectively decreased the proliferation, invasion, and migration of cervical cancer cells. Discussion: This study highlights the significant potential of transient receptor potential channels in cervical cancer, emphasizing their crucial role in prognostic prediction and personalized treatment strategies. The combination of TRP inhibitors with immunotherapy and targeted drugs may offer promise for individuals affected by cervical cancer. Introduction Cervical cancer (CC) is the predominant malignant tumor affecting the female reproductive system, with statistics showing that it accounts for 80% of all malignant tumors in this system.Additionally, there is a concerning trend towards younger individuals being diagnosed with cervical cancer (Cohen et al., 2019).In 2020, there were approximately 600,000 cases of cervical cancer diagnosed globally, resulting in 340,000 deaths (Stumbar et al., 2019;Sung et al., 2021).Despite advancements in treatment, the survival rate for patients with advanced cervical cancer remains low at around 15% due to its aggressive nature.Therefore, identifying new biomarkers for early detection and therapeutic targets is crucial for further research in this field. 
Transient receptor potential (TRP) channels are a family of ion channels which involved in several physiological processes, including nociception, temperature monitoring, and sensory transduction (Nilius et al., 2007).In 1969, researchers discovered TRP channels in a subspecies of Drosophila melanogaster.Transient receptor potential refers to the transient calcium ion influx that occurs when the drosophila variety is exposed to strong light for extended periods.TRP channels can be classified into six subfamilies based on their sequence homology: TRPA (ankyrin), TRPC (canonical), TRPM (melastatin), TRPML (mucolipin), TRPP (polycystin), and TRPV (vanilloid) (Caterina and Julius, 2001;Caterina and Pang, 2016;Moore et al., 2017).The function of TRP channels in cancer has attracted more attention recently.TRP channelrelated proteins expressed in various cancer cell types such as breast, prostate, lung, colon and pancreatic malignancies, have recently attracted more research attention.Specifically, TRPV6 has been shown to promote the invasion and migration of breast cancer cells (Cai et al., 2021).TRPV6 is linked to cancer cell death and proliferation in prostate cancer (Lehen kyi et al., 2007).TRPV3 has been demonstrated to facilitate cancer cell invasion and survival in lung cancer (Li et al., 2016).TRPM8 is upregulated in cancer cells and associated with a favorable prognosis in colon cancer (Pagano et al., 2023).In human pancreatic ductal adenocarcinoma tissue, TRPC1 is abundantly expressed and controls pancreatic ductal adenocarcinoma cell proliferation in a Ca 2+ independent way (Schnipper et al., 2022).Furthermore, TRP channels are involved in the interaction between cancer cells and the tumor microenvironment.Endothelial cells express TRPC1 and TRPC6, which promote angiogenesis, the process of forming new blood vessels that supply the tumor with nutrients (Li et al., 2017;Negri et al., 2019).Additionally, TRPV1 and TRPA1 expressed by immune cells are involved in the regulation of the body's immune response (Baral et al., 2018;Fattori et al., 2022). Several agonists and inhibitors of the TRP pathway have been developed and tested in preclinical studies.For example, the TRPV1 antagonist capsazepine has shown some effectiveness in inhibiting the proliferation and invasion of cervical cancer cells (De La Chapa et al., 2019).Additionally, the TRPV4 selective antagonist HC-067047 has been found to induce apoptosis and limit the growth of non-small cell lung cancer cells in vitro (Pu et al., 2022).Waixenicin A decreased the TRPM7 protein expression and inhibited the TRPM7-like currents in GBM cells, GBM cells showed increased apoptosis and decreased proliferation, migration, invasion and survival following treatment (Wong et al., 2020).Research into the TRP pathway has the potential to open up a new frontier in oncology treatment, particularly in the development of anti-cervical cancer drugs. In this study, we systematically assess the relationship between TRP channel-related genes (TRG) and cervical cancer and develop a reliable TRG-related prognostic signature that can be used as a validated biomarker to predict patient prognosis and immunotherapy response, offering a novel approach to tumor diagnostic and therapeutic approaches. 
Data collection Download from the Cancer Genome Atlas (TCGA, https://tcgadata.nci.nih.gov/tcga/) and Gene Expression Omnibus (GEO, https://www.ncbi.nlm.nih.gov/geo/)cervical cancer transcriptome RNA seq data and survival information were converted to TPM format and normalized using the "SVA package" (Leek et al., 2012).In addition, copy number variation (CNV) and single nucleotide variation (SNV) were downloaded from the TCGA database.The MSigDB database and the KEGG database were used to search for TRP channel-related gene sets, and 119 genes were obtained for subsequent analysis. Constructing TRG prognostic signatures Cox regression analysis was performed to correlate TRG with CC prognosis, and TRGs associated with survival were screened out.Lasso Cox regression analysis was performed to filter prognostic TRGs and construct a prognostic signature with a score of RiskScore = Σ (Expi * Coefi) (Engebretsen and Bohlin, 2019).The CC sample was divided into high and low-risk groups according to the median division of risk values in the prognostic signature, Kaplan-Meier survival analysis was performed, ROC curves were plotted to assess predictive efficacy, and the signature was assessed using univariate and multifactorial Cox regression combining clinical factors. Comprehensive analysis of TRG in terms of mutation, function, and pathway enrichment Gene Set Variation Analysis of TRG using the "RCircos package" and the "maftools package" (Zhang et al., 2013;Mayakonda et al., 2018).GO and KEGG enrichment analyses were performed using the "clusterProfiler" package in R. The "ConsensusClusterPlus package" was used to split TRG expression into two clusters based on TRG expression in cervical cancer (Wilkerson and Hayes, 2010).The "clusterProfiler" package was used, where p < 0.05 and q < 0.05, indicating significant enrichment of functional annotations (Yu et al., 2012). TRGs risk signature in immune cell infiltration and immunotherapy The "CIBERSORT package" assesses the relative proportions of immune cell types according to gene expression in the samples, ESTIMATE score, and tumor purity.Sensitivity to drugs was assessed using the "pRRophetic package" (Geeleher et al., 2014). Transwell experiment Matrigel was diluted with incomplete medium and added to the transfer chamber at 100 μL/well (at low temperature) at 37 °C for 1 h.After sufficient concretion of Matrigel for the invasion assay, cell lines from the experimental and control groups were collected and digested with trypsin and added to the transfer chamber with an incomplete medium.For migration experiments, matrigel was not added.600 μL of medium containing 10% fetal bovine serum was added to the lower chamber and incubated routinely for 24 h.The transfer chamber was removed and matrigel was wiped from the surface of the polycarbonate membrane with a cotton swab, gently washed with PBS, dried, and fixed in formaldehyde.Cells were stained with 1% crystalline violet and dried with a deionized rinse. Cell wound healing assay In cell wound healing experiments, cell lines were inoculated at 1 × 10 5 /mL in 6-well plates and after forming a monolayer of dense cells, straight lines were drawn with a 10 μL gun tip, and cell fragments were washed with PBS.After 2 consecutive days of observation, wound healing was observed by microscopy.ImageJ software calculated the extent of wound healing and the healing rate of the cell lines {wound healing rate at a given time = [(the initial wound area-48 h wound area)] *100%/initial wound area}. 
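The wound-healing rate defined at the end of the previous paragraph reduces to a one-line computation; the area values in the example below are hypothetical ImageJ measurements, not data from this study.

```python
def wound_healing_rate(initial_area: float, area_at_t: float) -> float:
    """Wound-healing rate (%) at a given time point:
    (initial wound area - wound area at time t) * 100 / initial wound area."""
    return (initial_area - area_at_t) / initial_area * 100.0

# Hypothetical ImageJ area measurements (in pixels):
print(wound_healing_rate(initial_area=120_000, area_at_t=45_000))  # 62.5 (%)
```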
Statistical analysis GraphPad Prism 8.0 statistical software was used to analyze the data.Measures were expressed as mean standard deviation, with p < 0.05 indicating a statistically significant difference. Genetic variation in TRP channelrelated genes Thirty-one of the 289 cervical carcinoma samples had TRP mutations, mostly missense mutations.The most frequent mutation was found in PIK3CA (Figure 1A).CNV alterations were prevalent in most TRP channel-related genes (TRG), with most alterations concentrated in copy number amplification deletions, but some TRG deletions were more frequent (Figure 1C).The CNV distribution of TRG on the chromosomes was mapped (Figure 1B).TRG with higher amplification frequencies were found to have higher mRNA expression levels in cancerous tissues than in normal tissues, such as ILIRAP, PIK3CA, PPPICA, and PLCB3, suggesting that TRG may be tumor heterogeneous in normal versus cancerous cervical samples (Supplementary Figure S1A). Integrated analysis of biological behavior and immune infiltration of TRP A total of 30 TRG associated with prognosis were screened, these genes are named transient receptor potential channel prognosis-related genes (TRGs) (p < 0.05) (Figure 2A).The results revealed that the same prognostic influencing genes were mostly positively correlated, such as a significant positive correlation between the benign prognostic genes TRPC4, TRPV3, and TRPV4.To further explore the biological behavior of TRG in cervical cancer, the "Consensus Cluster Plus package" was used to divide TRG into two clusters based on their expression (Figure 2B; Supplementary Figure S1B).The Cluster A group is a high-risk group, and most of the pathway-related enrichments are positively associated with immune signaling pathways, such as Tolllike receptor signaling pathway, Fc epsilon RI signaling pathway, and p53 signaling pathway (Figures 2C, 3B).Correspondingly in the immunoassay, the level of immune infiltration was generally higher in the high-risk Cluster A group compared to the Cluster B group.These included some immunosuppressive cells such as CD8 + T cells, regulatory T cells (Tregs), macrophages, and mast cells (Figure 2D). TRGs risk signature development and identification A visualization of the TRG grouping and TRGs grouping of cervical cancer samples relative to the prognostic information of patients was presented in a sankey diagram (Figure 3A).The key genes (PLA2G4C, IL1B, ADCY1, PRKCB, and TRPC4AP) were screened by Lasso regression for variables.A prognostic signature was constructed based on the expression of these five genes and the patients in the sample were classified into high and low risk groups.Their risk score = (PLA2G4C*-0.189099697516017)+ (IL1B*0.254146497223738)+ (ADCY1*0.371333082335357)+ (PRKCB*-0.461389785536792)+ (TRPC4AP*0.6989900211777).Using survival estimates based on the optimal cutoff expression value for each gene, results showed that the high-risk group score group had a poorer prognosis (p = 0.001) and that the number of deaths increased with increasing risk score and ROC curves to evaluate the effect of the signature.The results of the analysis in the validation cohort (GEO cohort) were also as expected (Figures 3C-F; Supplementary Figures S2A-F).Combined with the clinical traits of the patients, a worse prognosis was found with a higher risk score in the T 1 -T 2 subgroup (p = 0.005) (Figures 3G, H). 
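For reference, the risk score defined above can be computed directly from the five published coefficients; the sketch below also shows the median split into high- and low-risk groups. It assumes the gene expression values are the same normalized (TPM-based) values used to fit the signature.

```python
import numpy as np

# Coefficients of the five-gene TRGs signature, as reported above.
COEFS = {
    "PLA2G4C": -0.189099697516017,
    "IL1B":     0.254146497223738,
    "ADCY1":    0.371333082335357,
    "PRKCB":   -0.461389785536792,
    "TRPC4AP":  0.6989900211777,
}

def risk_score(expr: dict) -> float:
    """RiskScore = sum(Exp_i * Coef_i) over the five signature genes;
    `expr` maps gene symbols to normalized expression values."""
    return sum(expr[gene] * coef for gene, coef in COEFS.items())

def stratify(scores):
    """Split samples into high- and low-risk groups at the cohort median."""
    cutoff = float(np.median(scores))
    return ["high" if s > cutoff else "low" for s in scores], cutoff
```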
Functional enrichment analysis of TRGs GO enrichment analysis showed that the molecular function, biological process, and cellular component of TRGs were mostly gathered in information transfer, such as positive regulation of DNA-binding transcription factor activity, presynaptic cytosol, and calcium-dependent protein kinase C activity (Figure 4A).Pathway enrichment analyses revealed that TRGs were closely associated with calcium ion and metabolic pathways such as calcium ion transport, calcium ion transmembrane transport, and regulation of cytosolic calcium ion concentration and cAMP metabolic process (Figure 4B).Functional enrichment analysis of TRGs (A) GO enrichment analysis of TRGs in cervical cancer (B) KEGG enrichment analysis of TRGs in cervical cancer. TRGs risk score combined with tumor mutational burden to predict prognosis We analyzed differences in the genes with the highest frequency of the top 20 mutations in somatic mutations in the different risk groups.The high-risk group had a higher proportion of mutations compared to the low-risk group (Figures 5A, B).The highest mutation frequencies were found in TTN, PIK3CA, and KMT2C.The most common mutation type was also missense mutation.In the prognostic analysis in combination with TMB, the high TMB and high-risk score groups had a better prognosis and vice versa, which may provide new ideas for immunotherapy (Figure 5C). The great potential of the TRGs risk signature for therapy Immune scores, Stromal scores, and ESTIMATE scores were assessed between the different risk groups for comparison (Figure 6A) and there were differences in these aspects between the high and low-risk groups, with the high-risk group having lower scores.In addition, TRGs were correlated to varying degrees in most immune cells (Figure 6B).The TIDE score was used to evaluate the response to treatment with ICI in the different analyzed high and low-risk groups (Figure 6C).Given the differences in mutation and immune infiltration between the high and low-risk groups, patients were further assessed for the possibility of applying immune checkpoint inhibitors (ICI) by analyzing the association between immune cell proportion score (IPS) and risk signature.The high-risk group was also less effective in the IPS, IPS-PD1/PD-L1/PD-L2, IPS-CTLA4, and IPS-PD1/PD-L1/ PD-L2 + CTLA4 subgroups of treatment assessment (Figure 6D).The "pRRophetic package" was used to determine the effect of risk score on drug sensitivity.Common cervical cancer targeted drugs such as Sunitinib, Temsirolimus and Gefitinib are more effective in high-risk groups (Figure 6E).A single-cell study of gene expression in the tumor microenvironment, encompassing immune cells, stromal cells, malignant cells, and functional cells, revealed that the key gene in the model was TRPC4AP, which was more widely distributed in malignant cells (Figure 6F). TRP channel inhibitors' impact on cervical cancer The proliferation capacity of cervical cancer cells (HeLa and Siha) was assessed by the CCK-8 assay after 24 h of the action of various doses of TRP channel inhibitors to evaluate the influence of the TRP channel on the proliferation of cervical cancer cells.The development of cancer cells was suppressed by the pathway inhibitors in a dose-dependent manner, as illustrated in Figures 7A, B. 
At 36.01 and 54.20 μM concentration inhibitors, HeLa and Siha cells displayed around 50% suppression of cell proliferation, respectively.According to the results, HeLa and Siha cells' wound healing rates tended to decline with increasing inhibitor dose (Figures 7C, D).In comparison to the control group, the number of cervical cancer cells that migrated and invaded within 24 h reduced with increasing dosages of the inhibitor, according to the transwell assay (Figures 7E, F).Cervical cancer, a common and dangerous malignancy affecting women, is a complex disease regulated by multiple genes.Early symptoms are often subtle and diverse, making screening and physical examinations crucial (Liu et al., 2023).Patients may neglect the disease because it cannot be detected without cervical cancer screening and physical examination.After all, the disease's early symptoms are uncommon and its causes are varied (Gottschlich et al., 2023).According to statistics, the prevalence of cervical cancer is rising, therefore it's critical to raise women's knowledge of the disease and do the essential cervical cancer screening to detect the disease early and begin treatment (Siegel et al., 2023). There is growing evidence that TRP channels play a role in the development and progression of cervical cancer.One of the most well-studied TRP channels in cervical cancer is TRPV1.TRPV1 is overexpressed in cervical cancer tissues and cell lines and is associated with cervical cancer cell proliferation, migration, and invasion (Sánchez-Sánchez et al., 2015;Wang et al., 2022).In addition, through the activation of the β-catenin signaling pathway, TRPM4 has been demonstrated to promote cervical cancer cell proliferation and invasion (Armisén et al., 2011).Other TRP channels have also been associated with cervical cancer.In cervical cancer, TRPM7 expression regulated miR-543mediated cell cycle arrest, increased apoptosis in vitro, and inhibited tumor growth in vivo (Liu et al., 2019).Similarly, TRPM8 binding to Rap1 inhibited the adhesion of cervical cancer cells (Chinigò et al., 2022).Targeting TRP channels in cervical cancer is shown promising as a therapeutic target.Several TRP channel antagonists and agonists have been developed and tested undergoing preclinical studies testing (De La Chapa et al., 2019;Chai et al., 2022;Chen et al., 2023;Neuberger et al., 2023).However, there is a lack of an evaluation strategy based on the transient receptor potential channels to predict patient prognosis individually.Therefore, this study comprehensively analyzed the tumor microenvironment, immune infiltration, and potential impact of immunotherapy of TRG in cervical cancer to explore its intrinsic linkage, maximize the anti-tumor effect of TRG, combine chemotherapy, radiotherapy and immunotherapy, improve the efficacy of anti-tumor therapy. 
To establish a systematic multi-gene biomarker signature, Cox regression screening for TRG was followed by Lasso regression to establish a TRGs-based prognostic signature.The overall survival curve predicted that the high-risk group had a poor clinical outcome, while in comparison, patients with lower risk scores had a good prognosis, and the AUC in the ROC curve largely explains the reliability and applicability of this signature.Analysis of its biological behavior revealed that TRGs in the KEGG pathway are strongly associated with calcium signaling.This suggests that the genes screened in this signature play a key role in the transient receptor potential.The calcium signaling pathway is a source of energy on which a variety of cells rely for survival.Calcium imbalance is associated with tumor progressions, such as proliferation, invasion, and metastasis.Tumour immune dysfunction and rejection may be present due to higher TIDE scores in the high-risk group.The sensitivity to PD-1 and CTLA-4 inhibitors alone or in combination was found to be lower in the high-risk group than in the low-risk group in the IPS analysis.TIDE and IPS analyses suggest that immunotherapy, especially PD-1 and CTLA-4 inhibitors, is not recommended for patients in the high-risk group, but patients in the high-risk group with higher mutation loads may have better outcomes with immunosuppressive therapy.For patients who are not sensitive to immunological drugs, the treatment schedule should be changed in time, or targeted drugs may be used to improve the prognosis of patients to a greater extent.It has been reported that Gefitinib, as an inhibitor of EGFR, attenuates the effect of transient receptor potential melastatin 7 (TRPM7) on the migration and proliferation of vascular smooth muscle cells stimulated by epidermal growth factor.Therefore, indepth studies on the pathogenesis of TRGs in cervical cancer can provide new ideas for the development of new molecularly targeted drugs, and have great clinical translational value for the research of TRP channels in oncology drugs, which is still a gap in the development of targeted drug therapy against cervical cancer. We attempted to apply the effect of a TRP channel inhibitor propoxy)-4-methoxyphenethyl]-1Himidazole, a selective inhibitor of receptor-mediated Ca 2+ inward flow and voltage-gated Ca 2+ inward flow, currently mainly as a TRPC channel blocker, which shows the effect on cervical cancer cell growth.Notably, this drug promotes Pyk2 upregulation, hinders glioma progression and enhances focal adhesion formation by inhibiting TRPC4AP (Ding et al., 2006;Cheng et al., 2011).In gastric cancer cells, this inhibitor has demonstrated the ability to block endogenous TRPC6 channels, leading to cell cycle arrest in the G 2 /M phase and inhibiting cell growth (Cai et al., 2009). 
Our research aims to develop a prognostic risk model that offers feasible options for prognostic screening and targeted therapy of CC patients. However, some limitations of this study are worth mentioning. Validation of the signature is limited to the available data, and the ROC is not optimal owing to the short follow-up time of the GEO cohort. Therefore, we need to collect real clinical samples in subsequent studies to verify the accuracy of this signature in predicting patient prognosis. In addition, more research on the molecular mechanisms is needed to establish whether TRP channel inhibitors can be applied to patients with cervical cancer and to clarify the connection between the selected key TRP channel factors and cervical cancer. In vivo and in vitro experiments will further reveal how TRP channels participate in the development of cervical cancer. Overall, our findings may enable stratification of CC patients with high risk, poor prognosis, and variable treatment sensitivity based on the risk signature, thereby improving clinical outcomes for CC patients.

FIGURE 1 Genetic variation profile of transient receptor potential channel-related genes (TRGs). (A) Gene mutations carried in cervical cancer samples. (B) Copy number variation altered loci on chromosomes for TRGs. (C) Frequency of copy number variation in TRGs.

FIGURE 2 Integrated analysis of biological behaviors and immune infiltration of TRP. (A) TRG correlation analysis. (B) TRG-based consensus matrices of cervical cancer samples (k = 2). (C) GSVA displaying the activation state of biological behaviors in TRG clusters A and B. (D) Abundance of immune infiltration in clusters A and B (*p < 0.05, **p < 0.01, ***p < 0.001).

FIGURE 3 Modeling of the TRGs risk signature. (A) Sankey diagram showing the distribution of sample TRG clusters, TRGs clusters, and prognostic information of patients. (B) Differences in risk scores between TRG clusters. (C,D) Kaplan-Meier curves showing the overall survival analysis of TRGs in TCGA (test group) (C) versus GEO (validation group) (D). (E,F) ROC curves testing the effect of the model in the TCGA (E) and GEO (F) datasets. (G,H) Analysis of differences between high- and low-risk groups in clinical tumor infiltration grading.

FIGURE 5 Waterfall plot of mutation frequencies in the high- and low-risk groups of TRGs. (A,B) Analysis of the difference in mutation frequency between the high-risk group (A) and the low-risk group (B). (C) Kaplan-Meier curves showing the overall survival differences between the different TMB subgroups.

FIGURE 6 Association of the TRGs signature with immunotherapy and targeted therapy. (A) Differences in the tumor microenvironment between high-risk and low-risk groups. (B) Correlation between five prognosis-related TRGs and immune cell infiltration. (C) TIDE scores among different TRGs risk groups. (D) Differences in TRGs between TRGs risk groups; there were significant differences between the high- and low-risk groups. (E) Sensitivity analysis of drugs (Pazopanib, Imatinib, Docetaxel, Sunitinib, Temsirolimus, and Gefitinib) between high- and low-risk groups. (F) UMAP visualization of five model key genes in essential cervical cancer cell subpopulations (GSE168652). (*p < 0.05, **p < 0.01, and ***p < 0.001).
FIGURE 7 Effect of TRP channel inhibitors on cervical cancer cells verified by in vitro experiments. (A,B) Effect of TRP channel inhibitors on the viability of HeLa (A) and Siha (B) cervical cancer cells. (C,D) Scratch assay to detect the migration ability of HeLa (C) and Siha (D) cells. (E,F) Transwell assay to detect the migration and invasion ability of HeLa (E) and Siha (F) cells. (**p < 0.01, ***p < 0.001, ****p < 0.0001).
Non-consumptive effects stabilize herbivore control over multiple generations Understanding the factors that influence predator-prey dynamics requires an investigation of oscillations in predator and prey population sizes over time. However, empirical studies are often performed over one or fewer predator generations. This is particularly true for studies addressing the non-consumptive effects of predators on prey. In a previous study that lasted less than one predator generation, we demonstrated that two species of parasitoid wasps additively suppressed aphid populations through a combination of consumptive and non-consumptive effects. However, the non-consumptive effects of one wasp reduced the reproductive success of the other, suggesting that a longer-term experiment may have revealed antagonism between the wasps. The goal of our current study is to evaluate multi-generation consumptive and non-consumptive interactions between pea aphids (Acyrthosiphon pisum) and the wasps Aphidius ervi and Aphidius colemani. Aphidius ervi is a common natural enemy of pea aphids. Aphidius colemani is a non-consumptive enemy that does not consume pea aphids, but negatively affects pea aphid performance through behavioral disturbance. Large field cages were installed to monitor aphid abundance in response to the presence and absence of both species of wasp over four weeks (two parasitoid generations). We found that the non-consumptive enemy A. colemani initially controlled the pea aphid population, but control in the absence of parasitism was not sustainable over the long term. Aphidius ervi suppressed pea aphids through a combination of consumptive and non-consumptive effects. This suppression was more effective than that of A. colemani, but aphid abundance fluctuated over time. Suppression by A. ervi and A. colemani together was complementary, leading to the most effective and stable control of pea aphids. Therefore, promoting a diverse natural enemy community that contributes to pest control through consumptive and non-consumptive interactions may enhance the stability of herbivore population suppression over time. Introduction Non-consumptive effects, also known as non-lethal effects or trait-mediated interactions, are changes in prey phenotype (e.g., behavior, morphology, or physiology) in response to the perceived threat of predation [1]. Non-consumptive effects can ultimately impact prey population size by altering prey fitness or migration [2,3]. In general, non-consumptive effects are considered common and can produce direct and indirect effects on herbivores and plants that are as strong as and sometimes stronger than consumptive effects [4]. However, a review of non-consumptive studies involving arthropods found that over half of the studies were completed in under 24 hours and only one third lasted for more than one week [5]. Thus, our understanding of how non-consumptive effects influence predator-prey population dynamics is largely based on studies that are limited to a single predator and/or prey generation, and often less [5,6]. Increasing the temporal scale of non-consumptive studies to accommodate reproduction provides insight into how non-consumptive effects may influence population cycles [7][8][9]. Studies of sufficient duration to include prey reproduction reveal carryover of non-consumptive effects on subsequent generations of prey [10][11][12]. For example, grasshoppers under chronic risk of predation alter their jumping mechanics to more quickly escape from spider predators [13]. 
However, the resulting offspring are smaller and cannot jump as far nor evade predators as effectively as the offspring of grasshoppers reared under no predation risk. Hence, traits that enhance survival in one generation may predispose the subsequent generation to higher predation risk. Some studies have explored legacy effects such as this for prey [14,15], but the same is not true for predators. Predator reproduction is rarely incorporated into empirical non-consumptive studies. The common expectation based on consumption is that a decline in prey abundance will lead to an increase in predator abundance as predators eat prey and convert prey biomass into predator biomass through reproduction [16-19]. However, non-consumptive suppression of prey, which decouples prey suppression from predator reproduction, can provide alternative explanations for common predator-prey patterns documented in nature [7-9]. From the predator perspective, one of the constraints to conducting multi-generational studies is the challenge of establishing and sustaining relevant treatments. The common approach is to create treatments where prey are subjected to either a proxy of predator presence or a modified non-lethal predator that cannot consume prey. For example, prey may be exposed to odors associated with predation [e.g., 12,20,21] or subjected to simulated predator attack by means of poking or disturbing prey without the predator being present [e.g., 22]. Although these methods work well for studies focused on the short-term impact of predators on prey populations, neither of these approaches utilizes an actual predator, and thus there is no opportunity to explore the impacts of non-consumptive effects on multi-generational predator and prey interactions. Another common technique is to modify predator mouthparts by gluing or clipping them, so that predators can hunt and attack, but are physically unable to consume prey [e.g., 23,24]. In this case, prey interact with a living predator; however, the predator may not reproduce due to the inability to feed. Furthermore, if the predator does reproduce, the offspring will have fully-functioning mouthparts and be capable of feeding on prey, thus the treatment will not be maintained. The difficulty of teasing apart the role of consumption versus behavioral interactions while still allowing for predator reproduction limits our ability to examine the multi-generational contribution of these interactions to predator and prey dynamics. A unique opportunity to quantify non-consumptive effects without artificial manipulation arises when prey respond defensively to non-enemy organisms due to the erroneous perception of predation risk [25,26]. The parasitoid wasp Aphidius ervi Haliday (Hymenoptera: Braconidae) is a common natural enemy of pea aphids (Acyrthosiphon pisum (Harris), Hemiptera: Aphididae) that reduces pea aphid abundance through a combination of consumptive and non-consumptive effects [27-29]. The closely-related wasp Aphidius colemani Viereck does not parasitize pea aphids [30], but still contributes to pea aphid suppression through non-consumptive behavioral interactions such as provoking the pea aphid escape response of dropping from a host plant [26,31]. In a study that encompassed less than one parasitoid generation, we found that the natural enemy A.
ervi and the non-enemy A. colemani additively suppressed the pea aphid population when present together, but a decline in the number of A. ervi pupae (mummies) that formed in the presence of A. colemani suggested the potential for long-term interference [29]. Aphidius ervi oviposition success declines when abiotic conditions enhance the pea aphid drop-off response [32], and the same may be true in response to contact with A. colemani. The short-term nature of the experiment did not allow for an investigation of the potential for interference to influence the size of future generations of A. ervi wasps or the magnitude and stability of pea aphid suppression over time. Here we explore how the addition of a non-consumptive enemy influences the interaction between a parasitoid natural enemy and its host over multiple generations. Based on our previous results, we hypothesize that the non-consumptive enemy A. colemani will reduce the reproductive potential of the natural enemy A. ervi. Thus, non-consumptive effects are predicted to interfere with parasitoid-host dynamics, weakening pea aphid suppression and reducing plant productivity over the long term. Methods We factorially manipulated the presence of A. colemani and A. ervi in large field cages and monitored the pea aphid population over four weeks, or two parasitoid generations. The study was conducted at Bradford Research Farm (Columbia, MO) from 17-June-2014 until 24-July-2014. This is a university-owned research farm. No permits were required, and studies did not involve endangered or protected species. Experimental units were 2 m x 2 m x 2 m field cages that were buried approximately 30 cm into the soil to deter any organisms from entering the cages. Inside each cage, we transplanted three rows of four eight-day-old fava bean plants (Vicia faba L.). After 7 d, 10 pea aphids were released on each fava bean plant. Each cage also contained three rows of four 40-day-old collard plants (Brassica oleracea L.), with 10 green peach aphids (Myzus persicae (Sulzer)) per plant. Aphidius colemani wasps do not parasitize pea aphids, thus green peach aphids and their host plants were included to maintain the A. colemani population throughout the study. Green peach aphids were not the primary focus of this study, but data regarding green peach aphid populations are included in supplementary material (S1 Appendix). After a 24-h settling period for the aphids, one of four treatments was randomly assigned to each cage: (1) control with no parasitoids present, (2) 30 adult A. ervi, (3) 30 adult A. colemani, or (4) a mixed treatment of 30 adult A. ervi and 30 adult A. colemani. Parasitoids were released at a sex ratio of 1:1. Treatments were replicated seven times. After the initial release of parasitoid wasps, pea aphid abundance and the number of pea aphid mummies were counted every 7 d over 4 wk for a total of four sampling periods. After the final count, the aboveground biomass of the pea aphids' fava bean host plants was harvested, dried, and weighed. Aphids used in this study were reared in cages in the Ashland Road Greenhouse facility on the campus of the University of Missouri (Columbia, MO, 16:8 L:D, 26-38˚C) for many generations prior to use. Pea aphids were originally collected from an alfalfa field (Medicago sativa L.) and were maintained on fava bean plants. Green peach aphids were collected from and reared on greenhouse collard plants. Aphidius ervi and A. colemani wasps were purchased from Rincon-Vitova Insectaries (Ventura, California). 
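As a rough Python analogue of the analyses described next (the original analysis was run in SAS Proc Mixed), the temporal-stability metric (coefficient of variation of aphid abundance per cage) and its two-way ANOVA can be sketched as follows; the data layout, file name, and column names are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# One row per cage per sampling date, with 0/1 treatment indicators
# 'ervi' and 'colemani' and the pea aphid count (hypothetical layout)
counts = pd.read_csv("aphid_counts.csv")

# Coefficient of variation (SD / mean) of abundance per cage over time
cv = (counts.groupby(["cage", "ervi", "colemani"])["aphids"]
            .agg(lambda x: x.std() / x.mean())
            .rename("cv")
            .reset_index())
cv["log_cv"] = np.log(cv["cv"])  # log-transform, as in the original analysis

# Two-way ANOVA: main and interactive effects of wasp presence on stability
model = smf.ols("log_cv ~ ervi * colemani", data=cv).fit()
print(anova_lm(model, typ=2))
```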
The main and interactive effects of the presence of A. ervi and A. colemani on the size of the pea aphid population over time were analyzed with a repeated-measures analysis of variance (ANOVA) (Proc Mixed, SAS v.9.4, SAS Institute, Cary, NC). The variance-covariance structures tested were: compound symmetry, heterogeneous compound symmetry, first-order autoregressive, heterogeneous first-order autoregressive, Toeplitz, and unstructured. The first-order autoregressive variance-covariance structure was determined to be the best fit according to the lowest AIC value. Temporal stability in pea aphid suppression was calculated using the coefficient of variation (standard deviation / mean) of pea aphid population size in each experimental unit over time. The main and interactive effects of the presence of A. ervi and A. colemani on the coefficient of variation of pea aphid abundance were analyzed using a two-way ANOVA. During the study, pea aphid populations grew to such a large size in the control treatment and the treatment where A. colemani was present alone that they over-exploited the fava bean plants and led to plant death. Plant death led to a dramatic decline in the pea aphid population size on subsequent sampling dates in these two treatments. To account for this, we also compared the coefficient of variation in pea aphid abundance between the two treatments where pea aphid populations did not overexploit the fava bean plants: where A. ervi was present alone and where both A. ervi and A. colemani were present. The cumulative abundance of pea aphid mummies was compared between the treatment where A. ervi was alone and the mixed treatment with both A. ervi and A. colemani present with a one-way ANOVA. The analysis only included two treatments because pea aphid mummies only formed in treatments where A. ervi was present; A. colemani cannot parasitize pea aphids and its effect is entirely non-consumptive [33]. To assess the indirect effect of wasps on plants, the main and interactive effects of A. colemani and A. ervi presence on the dried aboveground biomass of fava bean plants at the conclusion of the experiment were analyzed using a two-way ANOVA. All variables were log-transformed to adhere to the assumptions of an ANOVA. Due to human error on the second count day (July 10), three values for pea aphid abundance from the control treatments were not included in the analyses. Results There was a three-way interaction between the presence of A. colemani, A. ervi, and the day of observation on pea aphid abundance (F 3,68.5 = 4.31, P = 0.0076, Fig 1), indicating that the interactions between the natural enemy and non-consumptive enemy wasps were not consistent over the course of the study. In the first two weeks of the study (July 3 and July 10), suppression by the two wasp species was additive (enemy × non-enemy interaction, July 3: F 1,24 = 0.21, P = 0.6512; July 10: F 1,21 = 1.00, P = 0.3291). In the last two weeks, suppression was synergistic (enemy × non-enemy interaction, July 17: F 1,24 = 8.70, P = 0.0070; July 24: F 1,24 = 7.15, P = 0.0135). However, this statistical interaction is likely not biologically relevant, since pea aphids overexploited the host plants in the control treatment and the bottom-up effect of reduced plant quality led to a dramatic decline in the pea aphid population size. The natural enemy A.
ervi reduced pea aphid population size (F 1,33.9 = 61.62, P < 0.0001), but suppression was not consistent over the course of the study, with a spike in pea aphid abundance on the third date (enemy × date interaction: F 3,68.5 = 8.76, P < 0.0001). The non-consumptive enemy A. colemani also did not exert consistent suppression on the pea aphid population over time (non-enemy × date interaction: F 3,68.5 = 6.18, P = 0.0009) and had no main effect on pea aphid population size (F 1,33.9 = 1.75, P = 0.1947). There was no evidence of an indirect interaction between A. ervi and A. colemani mediated by the presence of green peach aphids (the host for A. colemani) (S1 Appendix). A. ervi did parasitize green peach aphids at low levels in some cages. However, the rate of parasitism was not enough to suppress green peach aphid abundance relative to the no-wasp control. Furthermore, there was no interaction between the presence of A. ervi and A. colemani on the number of green peach aphid mummies formed. When all four treatments were included in the analysis, including those where the aphids overexploited the plants, there was a main effect of the presence of the non-enemy A. colemani on the coefficient of variation of pea aphid abundance (F 1,21 = 9.64, P = 0.0054; main-effect mean ± SE: 1.38 ± 0.11 in the absence of A. colemani, 0.97 ± 0.09 in the presence of A. colemani), indicating that the presence of the non-consumptive enemy reduced the variability in the abundance of pea aphids over time, i.e., led to a more stable pea aphid population size. There was no main effect of the natural enemy A. ervi and no interaction between the enemy and the non-enemy on the stability of the pea aphid population size (F 1,21 = 3.00, P = 0.0981; F 1,21 = 1.65, P = 0.2135, respectively). However, when the two treatments where the aphid populations grew so large that they killed their host plants were removed from the analysis, the coefficient of variation in pea aphid abundance was lower in the mixed treatment where both species of parasitoid were present, compared to when the enemy A. ervi was present alone (t 12 = 2.50, P = 0.0278; mean ± SE: 1.34 ± 0.14 in the absence of A. colemani, 0.83 ± 0.14 in the presence of A. colemani). The addition of the non-consumptive enemy A. colemani reduced the variability and increased the stability in pea aphid population size over time. The presence of the non-consumptive enemy A. colemani did not interfere with reproduction by A. ervi: there was no difference in total pea aphid mummy formation by A. ervi when A. ervi was alone or in the mixed treatment with A. colemani (t 12 = 0.44, P = 0.6700, Fig 2). The addition of either A. colemani or A. ervi led to greater dried aboveground biomass of fava bean plants (Fig 3B). There was no interaction between the effects of the presence of A. colemani and A. ervi on the dried aboveground fava bean plant biomass (F 1,24 = 0.00, P = 0.9788). Discussion Non-consumptive suppression of prey by predators is well documented, but our understanding of the importance of non-consumptive effects for predator-prey interactions is limited by the short duration of most experimental studies [5]. This is partly due to the logistical difficulty of creating treatments that tease apart non-consumptive from consumptive effects in a manner that will persist across predator generations.
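As a conceptual aside (not part of the study's analysis), a toy Nicholson-Bailey-style model illustrates why purely non-consumptive suppression decouples host control from parasitoid recruitment: a disturbance term lowers host growth but contributes nothing to the next parasitoid generation. All parameter values are illustrative.

```python
import numpy as np

lam, a, c = 3.0, 0.05, 1.0   # host growth rate, attack rate, conversion efficiency
d = 0.4                      # fractional host-growth reduction from disturbance

def step(H, P, disturbance=False):
    escape = np.exp(-a * P)              # fraction of hosts escaping attack
    growth = lam * (1 - d) if disturbance else lam
    H_next = growth * H * escape         # hosts: growth net of parasitism
    P_next = c * H * (1 - escape)        # parasitoids: recruited only from attacks
    return H_next, P_next

H, P = 50.0, 10.0
for gen in range(10):
    H, P = step(H, P, disturbance=True)  # disturbance suppresses H, not via P
    print(f"gen {gen + 1}: hosts {H:9.1f}  parasitoids {P:8.1f}")
```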
Taking advantage of the defensive behavioral response of pea aphids to non-enemy parasitoid wasps, we were able to quantify the non-consumptive effects of wasps on aphids over multiple generations. Consistent with previous short-term studies [26], we found that non-consumptive effects alone were sufficient to initially suppress pea aphid abundance where only the non-enemy A. colemani was present. However, suppression was transient in the absence of consumptive effects, and aphids escaped control over the long term. Contrary to our prediction, the non-consumptive enemy did not disrupt parasitoid-host interactions [29]. Instead, the combination of consumptive and non-consumptive effects stabilized pea aphid suppression, yielding the lowest and least variable pea aphid population size over the long term. The ability of natural enemies, including predators and parasitoids, to reduce aphid population growth by stimulating defense behaviors is well documented [24,34]. More recent evidence suggests that aphid performance is also negatively affected by disturbance from non-enemy parasitoids and other non-predaceous, non-competitive commensal species [25,26,29]. For example, fruit flies negatively affect bird cherry-oat aphid (Rhopalosiphum padi) population growth because the stress response of aphids does not discriminate between enemies and flies in search of food [25]. In agreement with these previous short-term studies, we found reduced pea aphid abundance in the presence of the non-enemy wasp A. colemani, but only in the first two weeks of the study. Over the next two weeks, the pea aphid population eventually grew so large that the fava bean host plants were over-exploited and died. Pea aphids likely escaped control over the long term because A. colemani population growth is not coupled with that of non-prey pea aphids, as predicted by consumptive predator-prey models [16-19]. Aphidius colemani wasps persisted in the system for the duration of the study, due to the presence of their green peach aphid prey on collard plants (S1 Appendix). Therefore, while many short-term studies find that non-consumptive effects are strong and prevalent, longer-term studies are necessary to understand whether and how the impacts of these interactions will scale up. Aphidius ervi is a relatively specialized natural enemy of pea aphids, with populations that are often coupled to those of their hosts [35,36]. Accordingly, we found that A. ervi alone consistently maintained pea aphid populations at levels well below the no-wasp control. As in other studies [37], we found that the magnitude of pea aphid suppression by A. ervi fluctuated over time, with an increase in aphid abundance in the third week. Wasp preference for particular aphid developmental stages and variation in host preferences across individual wasp females or females of different ages have been invoked as mechanisms to explain similar patterns in the past [38], although some have questioned the importance of such individual behaviors in influencing overall aphid-parasitoid dynamics [39]. Contrary to our prediction, the addition of the non-consumptive enemy A. colemani did not disrupt control of pea aphids by A. ervi. We previously documented a reduction in A. ervi reproduction in the presence of A. colemani, which was attributed to behavioral interference [29]. We found no evidence of such interference in this longer-term study.
Rather, the effects of the non-consumptive enemy and the natural enemy were complementary, with the addition of A. colemani preventing the temporary spike in aphid abundance at week 3 and reducing the variability of pea aphid population control over time. We attribute this response to temporal niche partitioning due to differences in the development times of the two parasitoid species. Aphidius colemani develops, on average, two days faster than A. ervi [40,41]. As a result, adults of the non-consumptive enemy A. colemani were actively foraging in the environment and maintaining suppression of pea aphid abundance through behavioral interactions [26,31] during times when A. ervi was inactive and in the pupal stage. Therefore, the presence of the non-consumptive enemy A. colemani may provide insurance against pea aphid outbreaks at times when the consumptive enemy, A. ervi, is inactive or present at low densities. Many mechanisms have been explored to explain the stability of parasitoid and host interactions, including foraging behavior, spatial processes, and mutual interference [37,42-45]. Our study demonstrates that behavioral interactions in the form of complementary consumptive and non-consumptive suppression by parasitoid wasps may also lead to increased stability of host populations over time. Previous studies have shown that non-consumptive interactions not only affect herbivore prey, but can also cascade down to indirectly impact plants [4,46,47]. In our study, the pea aphid natural enemy A. ervi controlled pea aphid populations, resulting in increased aboveground biomass of the fava bean plants. The addition of the non-consumptive enemy A. colemani led to more consistent suppression of pea aphid populations than when A. ervi was alone. However, the greater control of the pea aphid population was not reflected in an increase in fava bean plant biomass, although there was a trend for the highest plant biomass to be achieved when both species of wasp were present. Interestingly, the presence of the non-consumptive enemy alone also increased fava bean biomass. Thus, despite the short-term nature of pea aphid suppression by the non-consumptive enemy, suppression was still sufficient to benefit fava bean plants. Our study demonstrates the importance of spatial and temporal scale in influencing the outcome of ecological interactions [48-50]. In a previous study done in small cages over one parasitoid generation, we saw evidence of potential antagonism between these two species of wasps and their combined consumptive and non-consumptive effects on pea aphid population size [29]. However, in this multi-generation study, we found the greatest pea aphid suppression when both species were present. In addition, we document a novel role for non-consumptive enemy species in the environment: to prevent outbreaks of herbivore populations at times when consumer densities are low [51,52]. A species that would otherwise not be included in a food web or trophic interaction contributed to enhanced herbivore suppression and increased plant productivity through non-consumptive mechanisms.
What if Newton's Gravitational Constant was negative? In this work, we seek a cosmological mechanism that may define the sign of the effective gravitational coupling constant, G. To this end, we consider general scalar-tensor gravity theories, as they provide the natural field-theoretical framework for the variation of the gravitational coupling. We find that models with a quadratic potential naturally stabilize the value of G in the positive branch of the evolution and, further, that de Sitter inflation and a relaxation to General Relativity are easily attained. Introduction In Newton's law of gravitation, the gravitational constant, G, is assumed to be positive. This is a question of choice, and apparently it was P. S. Laplace who introduced the constant for the first time in his Traité de Mécanique Céleste, in 1799 [1]. Originally, Newton had put forward both the proportionality of the gravitational centripetal force (in his words) to the quantity of matter of the two bodies in interaction, as well as the inverse proportionality to the square of their separation [2]. However, he did not explicitly introduce G [3], presumably due to the lack of an internationally accepted system of units. Of course, this is required since, after all, G adjusts the dimensions of both sides of the defining equation for the strength of the gravitational interaction, and its sign denotes the attractive or repulsive character of the force. Oddly enough, it was in 1798, one year before the publication of Laplace's treatise, that H. Cavendish measured G with a torsion balance, but just as a necessary step, of secondary importance, to weigh the density of the Earth [4]. This measurement was made with the remarkable accuracy of 1%. The subsequent success of the gravitational law in tackling the motion of the celestial bodies of the solar system is well known, and, at the beginning of the 20th century, the only major problem was the anomaly in the precession of Mercury's perihelion, a mismatch first revealed by Le Verrier in 1855. It was Einstein's General theory of Relativity which not only solved this puzzle with flying colours, but also revolutionized our understanding of gravitation. One of the pillars of the theory is that we should recover Newtonian gravity when considering weak fields and bodies moving with low speeds compared to the speed of light. Thus G, whose role in the theory is to couple the geometry to the matter content of the Universe, is taken to be positive and is a constant under this framework. In 1938, Dirac made an astounding proposal, dubbed the Large Numbers Hypothesis, according to which any dimensionless ratio between two fundamental quantities of nature should be of the order of unity (for a more detailed account of the motivations see [5]). This led him to put forward that if G were to evolve with the Hubble rate of expansion of the Universe, this would account for the present disparity of about 40 orders of magnitude between gravitational and electromagnetic forces at the atomic level. This was the first time the variation of some fundamental constant was explicitly and seriously envisaged. Dirac's proposal was given a field-theoretical realization, first by P. Jordan within a Kaluza-Klein type approach (thus involving extra dimensions), and then, in 1961, by Brans-Dicke theory [3,6] motivated by Mach's principle. In both cases a dynamical scalar field couples to the spacetime curvature and thus plays itself a gravitational role.
Subsequently, a plethora of extended gravity theories that affect the coupling between the space-time geometry and the matter sector also prescribe the variation of this fundamental "constant" G [7,8]. In principle, within this framework, it becomes possible for G to change sign, trading attractive for repulsive gravity, and conversely. This might happen either during the cosmological time evolution, or even, conceivably, at spatially separated regions of space-time. The concern about the sign of the gravitational constant has been envisaged as a constraint to be respected by the spectrum of modified gravity theories, but the focus has never been directed at devising a mechanism to ensure its positiveness. For instance, Barrow [9] proposed that primordial black holes formed during the early stages of the Universe might retain "memory" of the value of the gravitational constant at the time of their formation, and hence exhibit diverse values of the latter depending on the instant of their formation, around t_Prim ∼ 10^{-25} s. In the present work we investigate the, somewhat heretical, possibility that the effective gravitational coupling might be negative within the general class of scalar-tensor (ST) gravity theories. We analyze a cosmological mechanism that determines the positiveness of the sign of G, even though it may exhibit transient periods in the negative region. We show that this cosmological device relies on the role of a cosmological potential, which reproduces a positive cosmological constant in the so-called Einstein frame. From this latter viewpoint it can be understood as another role of paramount importance of this remarkable constant. In Refs. [10,11] I. Roxburgh analysed the issues of the sign and magnitude of the gravitational constant, based on Einstein's correspondence principle, which demands that Newtonian gravity be recovered in the weak-field limit of the theory. His analysis is done in the framework of GR and is somewhat motivated by Mach's principle, leading him to conclude that G must be positive. Other studies which bear some relation to the present work are [12-19]. In this work we shall start by briefly looking at the implications of having G < 0 in cosmology, namely showing that inflation arises for a considerably large set of parameters, and that we obtain bouncing solutions that avoid the initial singularity when a cosmological constant is also considered. Then we analyze the cosmological behaviour of scalar-tensor theories to show how a subset of the solutions exhibits negative G, and how a cosmological potential provides us with a mechanism that favours positive G and eventually stabilizes its sign. In essence, we will show that the presence of a cosmological constant in the Einstein frame provides such a mechanism for an extended set of varying-G theories, which represents a relevant feature for the existence of a non-vanishing cosmological constant in the Einstein frame (and of a corresponding cosmological potential in the Jordan frame). Negative G in GR It must be said that if we envisage the trading of a positive G for a negative one within Einstein's General Relativity, we will be converting its attractive nature into a repulsive one, and this avoids the need to rely on exotic matter, violating the strong energy condition, to produce inflationary stages.
This is therefore an alternative ad hoc device, akin to Albrecht and Magueijo's varying speed of light for avoiding the perplexing complications of the inflationary scenarios [20]. The downside of this way of producing repulsive gravity is that, once assumed, it is forever: there would be no way of exiting inflation with canonical matter sources. The scalar-tensor scenario that we consider afterwards avoids the latter problem, and presents us with a natural and theoretically consistent framework for exploring the possible negativeness of G. Friedmann Models with a Single Fluid Consider the usual FLRW universes of the standard cosmological model, and take G = −|G| in Einstein's GR. We then have the following field equations

(ȧ/a)² + k/a² = −(8π|G|/3) ρ, (2)

ä/a = (4π|G|/3)(ρ + 3p), (3)

where dots denote derivatives with respect to time, a is the scale factor of the Universe, and ρ and p are the energy density and pressure, respectively. The signs on the right-hand side are the opposite of the usual ones. However, the contracted Bianchi identities are immune to this change of sign and the energy conservation equation is preserved,

ρ̇ + 3(ȧ/a)(ρ + p) = 0.

Thus, when the matter content satisfies the weak and strong energy conditions, ρ > 0, ρ + p ≥ 0, and ρ + 3p ≥ 0, we see from the Raychaudhuri Equation (3) that the expansion is accelerated, ä ≥ 0. Yet, this inflationary behaviour is constrained by the Friedmann Equation (2). It can be easily verified that single-fluid solutions are forbidden when k = 0, +1, and are restricted to ρ ≤ 3/(8π|G|a²) when k = −1. Further, notice that the transformation |G| → −|G| which is performed in the Einstein field equations of the FLRW models produces a system which mimics phantom matter, provided the equation of state relating the pressure and the energy density of matter is such that p(ρ) → −p(−ρ) when ρ → −ρ, preserving the field equations (we remark that this happens to be the case for the barotropic equations p = (γ − 1)ρ which are usually considered; in addition, the cosmography framework, as exposed in [21,22], also absorbs this transformation and is left unchanged). Model with a Cosmological Constant Consider a cosmological constant in addition to the perfect fluid for a metric with the signature − + ++. The field equations now read

(ȧ/a)² + k/a² = −(8π|G|/3) ρ + λ,

ä/a = (4π|G|/3)(ρ + 3p) + λ,

where λ = Λ/3. Recasting the latter equations in conformal time η, defined by dη = dt/a(t), we obtain the corresponding system, where a prime denotes a derivative with respect to conformal time. Assuming that the matter content is a perfect fluid with equation of state (EOS) p = (γ − 1)ρ, where γ is a constant that takes values in the range 0 < γ ≤ 2, we derive the exact solutions from Equation (8) by direct integration over a, which yields Jacobi elliptic functions. Naturally, the arbitrary integration constant η₀ that appears sets the origin of time. There are four cases that are of special interest: (i) Radiation, i.e., γ = 4/3, (ii) Dust, i.e., γ = 1, (iii) Stiff matter, i.e., γ = 2, and (iv) The coasting model, γ = 2/3. A case of great interest is the combination of pressureless matter and radiation together with a cosmological constant, since these are the three major components that best fit the expansion history of the universe (ΛCDM model) [23]. The corresponding dynamical system follows from (9), where ρ0d is the current density of dust.
In addition, the Friedmann constraint equation takes the form of Equation (12). In Figure 1, we represent the phase diagrams depicting the qualitative behaviour of these negative-G models for some choices of matter content (for a recent review of the methods of dynamical systems in cosmology see [24]). Analyzing the existence and nature of the fixed points, we classify the possible dynamical behaviours. Please note that we have compactified the phase diagrams using the transformation x = tanh a and y = tanh b, so that the boundary lines x = ±1 and y = ±1, respectively, correspond to a → ±∞ and b → ±∞. This allows us to identify the asymptotic solutions at infinity. The number and position of the fixed points in the finite region of the phase plane (a, b) is defined by the roots of Equation (12) when b = 0. Therefore, there will be at most three fixed points on the a axis (plus the fixed points at infinity, which will not be on the a axis). In Figure 1 we display the qualitative behaviour of the model for the three spatial curvatures and use reasonable values for the parameters in Equation (12), which take into consideration the ΛCDM model; we adopt Ω0 values consistent with this model. One must, though, be wary that in the phase diagrams of Figure 1 the half-plane corresponding to negative values of x is not physical, as it corresponds to a < 0. Yet its representation is useful, because it illustrates the complete behavior of the mathematical dynamical system underlying the physical scenario, regardless of the physical consistency of some of its parts. Moreover, in the present case it also allows comparison with the phase diagrams of the scalar-tensor models. The qualitative behaviour is similar to that found for the k = −1 case. The Figure 1c,d phase diagrams display the two possible behaviours of the k = +1 case, which reflect the existence of a bifurcation associated with two different subsets in what regards the balance of parameters (the bifurcation occurs for 8π|G|ρ0d = 4/√(6λ)). In the left one, Figure 1c, once again there are three fixed points: one at {a, b} = {0.662359, 0} plus the two fixed points at infinity. From a qualitative viewpoint we find the same behaviour as in the previous open models. However, in Figure 1d there are three fixed points on the x-axis, two saddle points and a center in between; the center corresponds to a basin of oscillatory behaviour. Yet, being located in the left half-plane, i.e., a < 0, its impact on the physical right-hand side is not qualitatively noticeable, apart from reducing the proportion of solutions which evolve towards the deS points at b = ±∞. Analyzing the fixed points of the compactified phase diagrams, we find that for k = −1 and k = 0 there are only three fixed points, while for k = +1 there could be either three or five fixed points, counting the two critical points at infinity. In the former cases, when there are only three fixed points, the qualitative behaviour is the same independently of the value of k. Different spatial curvature indices are distinguished only through a horizontal shift of the location of the critical point on the horizontal axis. These fixed points are saddle points which correspond to unstable static solutions. In all k cases, there are solutions where a expands to infinity. This fact was foreseeable, because of the repulsive character of gravity when G is changed to −|G|. They correspond to asymptotic de Sitter solutions (deS), both in the future and in the past, upon time reversal.
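To make this asymptotic de Sitter behaviour concrete, here is a minimal numerical sketch (not from the paper) that integrates the flat (k = 0) negative-G Friedmann constraint with dust, radiation, and a positive λ; the choice of units, 8π|G|/3 = 1, and the parameter values are assumptions of the sketch.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Flipped-sign (G -> -|G|) flat FLRW model with dust, radiation and lambda,
# in units where 8*pi*|G|/3 = 1 (an assumption of this sketch).
lam = 1.0         # lambda = Lambda/3 > 0
rho_d0 = 0.3      # dust density at a = 1 (illustrative)
rho_r0 = 1e-4     # radiation density at a = 1 (illustrative)

def adot(t, y):
    a = y[0]
    # Friedmann constraint with G -> -|G|:  H^2 = lambda - (rho_d + rho_r)
    h2 = lam - (rho_d0 / a**3 + rho_r0 / a**4)
    return [a * np.sqrt(max(h2, 0.0))]   # expanding branch

sol = solve_ivp(adot, (0.0, 10.0), [1.0], dense_output=True, rtol=1e-8)
t = np.linspace(5.0, 10.0, 6)
a = sol.sol(t)[0]
# At late times d(ln a)/dt approaches sqrt(lambda): the de Sitter attractor
print(np.gradient(np.log(a), t))   # -> values close to sqrt(lam) = 1.0
```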
These asymptotic de Sitter solutions reflect the eventual domination of the cosmological λ-term, overcoming the impact of the sign of G; indeed, the swapping of the sign of G enhances this domination. Obviously, according to this behaviour, the current accelerated expansion of the Universe is not a problem, but we have a gravity which is inconsistent at small scales with the weak-field limit, and thus would be at odds with Solar System behaviour [10,11]. However, the analysis pursued in this section is merely a preliminary step to assess the cosmological impact of a negative gravity. In the following section we shall consider the issue within the more appropriate framework of modified metric gravity theories which assume the variation of G. Scalar-Tensor Gravity Theories We now consider general scalar-tensor gravity theories given by the action

S = (1/16π) ∫ d⁴x √(−g) [ φR − (ω(φ)/φ) g^{αβ} ∂αφ ∂βφ − 2U(φ) ] + S_m ,

where a potential term U(φ) of cosmological nature is considered. (We shall also use U(φ) = φλ(φ).) The archetypal feature of this class of theories is the fact that Newton's gravitational coupling is G = 1/φ, and generically varies. The scalar field φ may be seen as the gravitational permittivity of the space-time [25]. In the field equations, T ≡ T^c_c denotes the trace of the energy-momentum tensor T^α_β. When applied to the FLRW models we obtain the generalized Friedmann equations. Please note that the cosmological potential U(φ) = φλ(φ) effectively reduces to a cosmological constant when λ(φ) = λ0 = constant in this frame. We introduce the redefined variables X and Y and use the time variable dη = dt/√X. Observe that in the definition of X, φ0 is the value of φ at some initial condition, which we shall normalize to φ0 = 1 without loss of generality. More importantly, observe that X < 0 when φ < 0, i.e., when G < 0. This is in fact the crucial detail which allows us to extend the study of the dynamics into the region where φ = 1/G is negative. The FLRW equations are then recast as a generalized Friedmann equation (24), the scalar-field equation (25), and the generalized Raychaudhuri equation (26), where M is a constant defined by M ≡ 8πρ0/3. When the potential is U(φ) = λ0φ², the latter equations reduce to a closed system which can be exactly integrated for the cases where the variables decouple, namely vacuum, radiation, and stiff matter [26-28]. Indeed, from Equation (25) we see that in these cases Y can be integrated directly, with an arbitrary integration constant f0 which fixes the initial value of Y (Equation (27)). However, for our present purposes, we only need to assess the qualitative behaviour of the dynamical system [19,29-32]. The crucial point in our analysis of the sign of the gravitational coupling φ is that instead of choosing either the original, so-called Jordan frame or the conformally transformed Einstein frame arising from rescaling the metric with a factor φ/φ0, where φ0 is an initial value of φ, say, at present, we consider the variables (X, Y), where X = φa²/φ0 reflects the actual sign of φ, as a² ≥ 0. An inspection of Equations (24)-(26) shows that when X → 0, Equation (24) is dominated by the scalar-field term (YX)², which is constant for radiation, so that we expect the phase-space trajectories to cross the X = 0 axis from right to left when X′ < 0, and from left to right when X′ > 0. Interestingly, Equation (26) shows that when X → 0, the dominant term is 3M(2 − γ) lim_{X→0} (X/φ)^{(4−3γ)/2}, so that it is actually the matter term that is responsible for the turning around of the trajectories towards the positive side of X, and hence of φ > 0.
Finally, the presence of a quadratic cosmological potential eventually dominates for large values of X and consequently stabilizes the sign of φ. In the following subsections we perform a qualitative analysis confirming this behaviour. Studies of scalar-tensor theories have been performed by one of the present authors using these techniques [14,16], and similar and complementary analyses can be found in the literature that followed [18,19,30,31,33,34]. Some of these works focus their analysis on the case of Brans-Dicke theory [30,33,34], while other investigations consider more general scalar-tensor theories. In most of the cases, the qualitative studies rely on choices of variables that make it difficult or even impossible to discuss the sector of the phase space where φ is negative, e.g., [33,34] (for a more detailed discussion of the use of the qualitative analysis of dynamical systems applied to scalar-tensor theories see [24,29] and references therein). Models without a Cosmological Potential We begin by considering the case where the Brans-Dicke (BD) like scalar field φ is massless, i.e., the cosmological potential is absent. This will be contrasted with the case where there is a quadratic potential. By the same token, it will enable us to assess whether there is any effect due to a variation of the coupling ω(φ) with regard to the issue of determining the sign of φ. For this case, the previous Equation (26) can be written for vacuum (M = 0) and a stiff fluid (γ = 2) as Equation (28); an analogous equation holds for radiation (γ = 4/3). The plots represented in Figure 2 show the phase diagrams of this system in the variables (X, W) for the case without potential, i.e., λ0 = 0. This is again done resorting to a phase-space compactification where the points at X, W → ±∞ are located at the boundaries x = tanh X = ±1, y = tanh W = ±1. One realizes that there are trajectories which cross the X = 0 dividing line in both directions, thus promoting the transition from a negative φ into a positive φ, and conversely (and hence a swapping of the sign of G, as φ = G⁻¹). Please note that, as in the GR case previously considered, there is a mirror reflection between the top and lower halves of the phase diagrams, arising from time reversal. We have used a color scheme that depends on the beginning and the end of each trajectory. In Figure 2, the top three phase diagrams represent vacuum and stiff fluid models, whereas the lower three correspond to radiation models. From left to right we have the k = −1, 0, and k = +1 models, respectively. It is immediately apparent that in the vacuum models the number of trajectories that cross in one direction, say from left to right, is the same as the number of those which cross in the opposite direction. Once again there is a mirror symmetry, with time reversal, between the top half and the lower one. Therefore, assuming that a measure of the probability of the model having a positive gravitational constant, or otherwise, is proportional to the phase-space area, we realise that both signs occur with the same probability. In this sense, the same behaviour occurs for the three cases of vacuum or stiff fluid. In addition, in the case of a positive curvature, i.e., Figure 2c, the sign oscillates forever, while in the open models the trajectories evolve towards the Milne solution, x = y = 1, in the k = −1 case, and are characterized by X = ±f0 in the k = 0 models, which actually correspond to the solutions found in [35]. Figure 2 shows the behaviour of X and X′ for a scalar-tensor theory without potential.
The upper three phase diagrams, (a-c), respectively, correspond to the k = −1, 0, +1 cases for vacuum and stiff fluid. The lower three phase diagrams, (d-f), respectively, correspond to the k = −1, 0, +1 cases for radiation. On the other hand, for radiation [26], there is a preference for the positive sign or, at least, for finishing with a positive gravitational coupling, as represented by the blue and yellow areas. This is a reflection of the fact that the solutions are late-time dominated by the matter component [36,37]. This is more apparent in the k = −1, 0 cases, but the k = +1 case now exhibits a subset of trajectories with oscillatory behavior, confined to the φ > 0 region. The impact of matter is dependent on the scalar-tensor coupling ω(φ). When 1/√(2ω + 3) → 0 the scalar-tensor theories approach GR, and this is implicit in Equation (27). We thus see that the dynamics of the vacuum FLRW models in ST gravity does not favour positive values of G over the alternative possibility of a negative G. The dynamics is such that the upper half of the phase space corresponds to G > 0 and the other half to G < 0, and they are mirror reflections of one another. Thus both possibilities for the sign of G have the same probability. However, when matter is present, in the case of open models (including k = 0) there is a higher probability for a positive value of G, following from a larger proportion of solutions which evolve to become matter dominated. Models with a Cosmological Potential When we allow for a potential with a positive λ0, a quite different picture emerges. Indeed, the phase diagrams corresponding to this case are represented in Figure 3, and we see that now there is an equilibrium point at x = 1, y = 1, corresponding to X = +∞, X′ = +∞, which attracts almost all trajectories of the phase plane, and this happens for all spatial curvature cases. This attractor at infinity corresponds to a de Sitter attractor, and thus to exponential behaviour of X in cosmic time. The only trajectories which do not end at this critical point are found in the closed (k = +1) and open (k = −1) models, circling the center equilibrium point. This is illustrated for the vacuum and radiation cases which were envisaged in the massless ST models in the previous subsection. • Vacuum (M = 0) or stiff fluid (γ = 2) with a cosmological potential. Recalling that U(φ) = λ0φ², we derive the corresponding system for these two cases. Please note that the case γ = 2, corresponding to stiff matter, can be shown to be reducible to the vacuum case of a theory with a different coupling strength ω(φ) (see [27], and the companion paper [28] to the present work). Now, the fixed points {X, X′} within the finite region will be positioned at {0, 0} and {2k/λ0, 0}. To display the phase diagrams, λ0 has been taken equal to 4.5 so as to show both points sufficiently separated. • Radiation (γ = 4/3) with a cosmological potential. In this case the system and its fixed points can be derived analogously. For the cases k = ±1, the fixed points require Mλ0 < 1. When this is not satisfied there are no fixed points, as illustrated in Figure 3d. For the case k = 0 there are no fixed points within the finite region of the phase plane.
The qualitative behaviour depicted in Figure 3 reveals that, with the exception of the oscillatory solutions, confined to a closed patch, all other solutions emerge from a collapsing deS solution at X = ∞, X′ = −∞ and end in the deS solution at X = ∞, X′ = +∞, thus revealing the domination of the cosmological potential term [14,30,32-34,38,39]. More importantly, we see that the solutions are attracted to the positive-G half-plane. We thus conclude that the consideration of a cosmological potential has the power to induce the dynamics of FLRW models to favour positive values of G instead of negative ones. Thus this provides a cosmological mechanism to stabilize G in the positive sector. In addition, the deS asymptotic behaviour is accompanied by a relaxation towards GR [14-16,18,41]. Observational Features ST gravity theories have to satisfy several observational bounds, namely the so-called Parametrized Post-Newtonian (PPN) weak-field, solar-system tests, bounds stemming from the cosmic microwave background (CMB), from baryonic acoustic oscillations (BAO), and from primordial Big-Bang Nucleosynthesis (BBN), as well as bounds on the time variation of the gravitational "constant", Ġ/G [3,42-44]. The local weak-field bounds can be somewhat alleviated if some chameleon or Vainshtein mechanism applies, but it is difficult to evade the other bounds on wider scales. Yet, the vast majority of these bounds pertain to models where the scalar field has no cosmological potential (see though [49]). These bounds, therefore, imply that a primordial variation of the gravitational coupling must have been severely damped before the time of BBN, such that a positive coupling satisfying mild deviations from GR [47] is not only compatible with the observed light-element abundances, but also solves the so-called 7Li problem. Subsequently, during the following radiation and matter epochs, the cosmological approach to GR is achieved, implying that G is positive. Summary and Conclusions In this work, we have investigated a cosmological mechanism that induces the value of the gravitational effective coupling "constant" to be positive. This is naturally done in the framework of scalar-tensor (ST) gravity theories, where this coupling varies and which thus allow for the possibility of a negative coupling. We have considered the cosmological evolution of ST models both with and without the presence of a cosmological potential. We have resorted to a dynamical-systems analysis which enables us to bring out the relevant qualitative features of the models. In the absence of the cosmological potential, the presence of matter or radiation favours a positive value of the gravitational "constant" when the evolution enters a phase of matter domination. This is a mild effect, and it is a consequence of Damour and Nordtvedt's relaxation mechanism towards GR [50]. However, it is when a quadratic cosmological potential, U(φ) = λ0φ², is present that an attracting mechanism towards a positive value of the gravitational running "constant" becomes manifest. This is accompanied by an asymptotic de Sitter behaviour. By the same token, this system produces two additional effects: a de Sitter inflation and a relaxation towards general relativity. The latter effect allows, in particular, the fulfilment of the observational bounds on |Ġ/G|, when the potential is exactly quadratic in the Jordan frame.
It effectively acts as a cosmological constant in the Einstein frame, and the stabilization of the gravitational constant in the positive sector may be seen as a by-product of the cosmic no-hair theorem. This mechanism of stabilization of the sign of G should take place early enough, in the primordial stages of the universe, consistently with the latest assessments of observational constraints on ST theories [44,51].
Exploring memory synchronization and performance considerations for FPGA platform using the high-abstracted OpenCL framework: Benchmarks development and analysis A key benefit of the Open Computing Language (OpenCL) software framework is its capability to operate across diverse architectures. Field programmable gate arrays (FPGAs) are a high-speed computing architecture used for computation acceleration. This study investigates the impact of memory access time on overall performance in general FPGA computing environments through the creation of eight benchmarks within the OpenCL framework. The developed benchmarks capture a range of memory access behaviors, and they play a crucial role in assessing the performance of spinning and sleeping on FPGA-based architectures. The results obtained guide the formulation of new implementations and contribute to defining an abstraction of FPGAs. This abstraction is then utilized to create tailored implementations of primitives that are well-suited for this platform. While other research endeavors concentrate on creating benchmarks with the Compute Unified Device Architecture (CUDA) to scrutinize the memory systems across diverse GPU architectures and propose recommendations for future generations of GPU computation platforms, this study delves into the memory system analysis for the broader FPGA computing platform. It achieves this by employing the highly abstracted OpenCL framework, exploring various data workload characteristics, and experimentally delineating the appropriate implementation of primitives that can seamlessly integrate into a design tailored for the FPGA computing platform. Additionally, the results underscore the efficacy of employing a task-parallel model to mitigate the need for high-cost synchronization mechanisms in designs constructed on general FPGA computing platforms. Introduction A field programmable gate array (FPGA) is an integrated circuit that programmers can configure many times to achieve their goals [1,2]. FPGAs include many low-level operations, such as shifts and additions. Intel FPGAs typically incorporate several resources, such as RAM blocks and DSPs for various complex arithmetic functions, as well as look-up tables (LUTs) [2]. A single LUT implements a small logic function (Fig 1), and multiple LUTs can be combined to implement more complex functions. The DE5 (Stratix V) FPGA device is used in the present study. The adaptive logic module (ALM) resource allows a wide range of functions to be implemented efficiently. Each ALM contains several function units. The block diagram for the ALM is shown in Fig 1 [3]. As a reconfigurable architecture, an FPGA offers fine-grained, bit-level configurability [4-7]. FPGA technology provides a flexible parallel hardware architecture that includes many logic components, such as adders, multipliers, and comparators, as well as numerous DSPs (digital signal processors), LUTs, clocks, configurable I/O, memories, and wired connections between these components. Because these components operate concurrently, allowing a large amount of computation to be performed independently at once, a high level of parallelization can be achieved with an FPGA implementation [8-10]. Many semiconductor companies, including Xilinx, Altera, Actel, Lattice, QuickLogic, and Atmel, have produced and improved FPGAs.
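To make the LUT abstraction concrete, here is a toy software model (illustrative only, not vendor code): a k-input LUT is simply a 2^k-entry truth table addressed by the input bits, so any k-input Boolean function can be "programmed" by filling the table, and wider functions are built by combining LUTs.

```python
class LUT:
    """Toy model of a k-input look-up table: a 2^k-entry truth table."""
    def __init__(self, k, truth_table):
        assert len(truth_table) == 2 ** k
        self.k = k
        self.table = truth_table

    def __call__(self, *bits):
        # Pack the input bits into a table index, LSB first
        idx = sum(b << i for i, b in enumerate(bits))
        return self.table[idx]

# "Program" two 3-input LUTs as a full adder (sum and carry-out)
sum_lut = LUT(3, [(a ^ b ^ cin) for cin in (0, 1)
                                for b in (0, 1) for a in (0, 1)])
carry_lut = LUT(3, [(a & b) | (cin & (a ^ b)) for cin in (0, 1)
                                              for b in (0, 1) for a in (0, 1)])

print(sum_lut(1, 1, 0), carry_lut(1, 1, 0))  # 1 + 1 + 0 -> sum 0, carry 1
```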
Three types of FPGA-based spatially reconfigurable computing environments are now commercially available: commodity FPGA-based accelerator cards, stand-alone SOPC environments, and cloud-based spatially reconfigurable platforms. Commodity FPGA-based accelerator cards are the most common commercially available spatially reconfigurable computing environment and were chosen as the computing environment for this research. These cards are designed to be incorporated into a standard CPU-based computing system as an add-on low-profile PCIe-based daughter card. They incorporate one or more high-end FPGAs and significant amounts of multi-banked DDR SDRAM physical memory (8 GB to 32 GB) local to the card. The cards often also contain high-speed network ports and flash memory that can be used to load default configurations into the FPGAs. Stand-alone SOPC configurations are also quite prevalent at the time of this writing. They include high-end FPGAs that often contain built-in embedded CPU processing cores, varying amounts of DDR SDRAM physical memory, and a host of I/O interfaces. SOPC configurations differ from accelerator cards primarily in that they are not designed to augment an existing CPU system [11].

FPGA platforms are now also becoming available in the cloud [12]. FPGA-based resources are accessible through the OpenStack virtual machine environment, which provides tools for cloud resource management [13,14]. In a related study, a framework that integrated Xilinx FPGAs into an OpenStack-based cloud showed great efficiency and scalability when hosting multiple processes and VMs [15]. FPGAs are also accessible as F1 compute instances on the Amazon Elastic Compute Cloud, where each instance contains up to eight FPGAs [16]. These instances can be used effectively to realize customized user designs in a wide range of commercial and scientific applications.
Commodity FPGA-based technology has several issues, though, which must be carefully considered. One important issue is that while it is possible to create specialized functional units and data paths that closely mirror the structure of the application, the FPGA resources that are available are usually only a fraction of those required to implement the application in its most optimized form. Thus, intelligent time sharing of these resources is mandatory and is the system-wide focus of a very complex optimization problem. The time it takes to configure an FPGA is large compared with the time taken to perform a base operation: the reconfiguration time for large FPGAs can be on the order of seconds, whereas the internal clock speed can exceed 300 MHz. This means that internal FPGA resource trade-offs may have to be made that decrease utilization and increase time sharing in order to reduce the number of FPGA reconfigurations required. Another possibility is to exploit partial reconfigurability, which is supported by most modern FPGAs. Partially reconfigurable devices allow the logic functionality of a subsection of the programmable resources to be reconfigured without interrupting the operation of the rest of the reconfigurable logic. Unfortunately, this feature is often poorly utilized. Another major issue is the time it takes to synthesize a design. The fine-grained complexity of FPGAs can result in extremely long design compilation times, which can take hours or days to complete. This problem is most apparent when the FPGA resources needed by the application approach the actual resources present on the system. It becomes imperative in such cases that the high-level design environment allow the functionality of the design to be verified quickly before it goes through this lengthy process. Fortunately, high-level synthesis environments such as OpenCL support an emulation mode in which the design can be executed on the CPU. Still, the long compilation time precludes the just-in-time compilation techniques that are possible in GPU and some CPU applications; all modules that are to be executed on the FPGA must therefore be pre-generated in an offline manner [11]. To implement a design on a general FPGA computation platform, one can employ languages such as Verilog, VHDL, or other supporting languages like SystemC. However, opting for VHDL or Verilog entails writing numerous lines of code, even for relatively simple tasks, whereas accomplishing a similar task in OpenCL requires only a few lines. Programmers who target hardware description languages (HDLs) must possess substantial experience with the underlying hardware, whereas OpenCL abstracts away these hardware details.
OpenCL is an open computing language used to harness the benefits of multiple processing elements. The wide variety of platforms that support OpenCL makes it an attractive choice for heterogeneous systems in which computations can be distributed among different architectural elements [10]. OpenCL code written to run on the FPGA is implemented as a kernel, and the kernel code is compiled using the Intel offline compiler (IOC). A kernel can be executed with one or multiple work-items (threads) [11]; the choice depends on the code characteristics, and the goal is to achieve the highest degree of parallelism. In addition to providing a portable model, the OpenCL standard naturally enables parallel algorithms to be specified for FPGAs at a far higher level of abstraction than hardware description languages (HDLs) like VHDL or Verilog.

Because FPGAs are not just processors with a typical software design flow, targeting them from OpenCL presents some special challenges. The FPGA architecture differs significantly from the typical platforms (such as CPUs and GPUs) that OpenCL implementations target. For instance, FPGA makers have recently introduced programmable systems-on-chip (SoCs), in which a SoC is coupled to FPGA fabric to create a customizable platform for an embedded system environment, like the Zynq platform [17]. Additionally, there is plenty of room for OpenCL to adapt to this kind of platform, given the long compilation times, the programmable nature of FPGAs, and the capability for partial reconfiguration [18].

The notion of pipeline parallelism is central to the IOC, which synthesizes the high-level OpenCL code onto the target FPGA device. Pipelined architectures allow data to pass through various stages before the result is attained. The IOC creates a customized pipeline architecture based on the kernel code [19]. Figs 2 and 3 illustrate how the IOC creates the pipeline architecture for a given kernel code. Several optimization techniques, such as shift registers, dataflow, loop merging, and loop unrolling, can be used to create a powerful design architecture, as shown in Figs 2 and 3. Loop unrolling allows more operations to be performed per clock cycle by duplicating the necessary function units, while the shift-register technique reduces the dependency between consecutive statements, thereby reducing the number of stall cycles. By incorporating dataflow and loop merging, the design gains more capability to overlap instruction execution. The Intel FPGA compiler provides several tools to tune the design's performance and resolve critical issues that may reduce the effectiveness of the architecture before the design is synthesized for the FPGA device [20].
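As a concrete illustration of these pipeline optimizations, the following single work-item kernel is a minimal hypothetical sketch (not code from the paper) showing the unrolling pragma and the shift-register pattern in the style the IOC favors; SR_DEPTH is an assumed depth chosen to cover the floating-point adder latency.

// Hypothetical OpenCL C sketch: a single work-item reduction written in a
// pipeline-friendly style. The shift register breaks the loop-carried
// dependency on the accumulator; the fully unrolled inner loops duplicate
// function units so that several operations issue per clock cycle.
#define SR_DEPTH 8  // assumed depth, should cover the adder latency

__kernel void sum_reduce(__global const float *restrict in,
                         __global float *restrict out,
                         const unsigned n)
{
    float shift_reg[SR_DEPTH + 1];

    #pragma unroll  // fully unrolled initialization
    for (int i = 0; i <= SR_DEPTH; i++)
        shift_reg[i] = 0.0f;

    for (unsigned i = 0; i < n; i++) {
        // start a new partial sum at the head of the shift register;
        // shift_reg[0] holds the partial from SR_DEPTH iterations ago
        shift_reg[SR_DEPTH] = shift_reg[0] + in[i];
        // shift every element down one slot per iteration
        #pragma unroll
        for (int j = 0; j < SR_DEPTH; j++)
            shift_reg[j] = shift_reg[j + 1];
    }

    // combine the SR_DEPTH independent partial sums
    float total = 0.0f;
    #pragma unroll
    for (int i = 0; i < SR_DEPTH; i++)
        total += shift_reg[i];
    *out = total;
}

The key design choice is that the main loop carries no dependency shorter than SR_DEPTH iterations, so the IOC can schedule one loop iteration per clock cycle instead of stalling on the adder.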
Code written in OpenCL to perform acceleration on the Intel FPGA architecture has two parts. The first part is the host code, written in standard C/C++ and compiled using the gcc/g++ compiler [8]. The host code is responsible for starting the acceleration process, deciding what data should be transferred between the host and the FPGA global memory, and deciding which parts of the code should be accelerated on the FPGA device. The overall host code is a sequence of steps taken before and after the kernel code is launched. The second part is the kernel code, implemented using the OpenCL APIs and compiled with the IOC to generate the device executable [10]. Whenever the FPGA is reprogrammed, the newly constructed design replaces the old one. Employing the FPGA as the primary processing platform makes it easy to control the number of concurrent tasks and the data stored in memory. In the envisioned design, all operations are executed exclusively on the FPGA development board; this encompasses accessing the FPGA memory system, executing the proposed benchmark, and ensuring that only one algorithm runs at any given time.

The Intel offline compiler, functioning as a high-level synthesis compiler, produces multiple files during compilation. Notably, VHDL and Verilog files are generated among these, and they play a crucial role in constructing the functional units of the final design. Even so, a hand-written VHDL or Verilog design is more resource-efficient: the additional resources demanded by OpenCL stem primarily from the need to incorporate OpenCL-compliant support logic on the FPGA, contributing to a resource-area overhead.

This work is based on the study "Efficient Synchronization Primitives for GPUs" by Stuart and Owens [21], in which a set of eight benchmarks was developed using the CUDA software framework to study the effect of memory access time on overall performance. The present study replicates this work and studies the effect of several synchronization functions on overall performance when targeting the general FPGA computing platform. Several synchronization techniques can be used when multiple threads cooperate on related tasks and access commonly shared variables. The barrier is a common synchronization technique that makes all synchronized threads stop at a certain point; once the last thread reaches this point, all threads resume execution [22]. A mutex is another synchronization mechanism, allowing only one thread at a time to execute in the critical section so as to avoid race conditions. A binary semaphore is similar to a mutex when a single resource exists [23], whereas a counting semaphore controls access to shared resources when there are multiple instances of a resource, each of which can be used by at most one thread at a time [24].
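The host-side sequence just described can be sketched as follows. This is a hypothetical, minimal example, not the authors' host code: error handling is omitted, and the kernel name sum_reduce and the binary file name kernels.aocx are assumptions. On the Intel FPGA SDK the program object is created from the image precompiled by the IOC rather than built from source at runtime.

/* Hypothetical host-code sketch in C (not the authors' code). */
#include <CL/cl.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* 1. Discover the platform and the FPGA accelerator device. */
    cl_platform_id platform;
    cl_device_id device;
    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_ACCELERATOR, 1, &device, NULL);

    /* 2. Create a context and a command queue. */
    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, NULL);
    cl_command_queue q = clCreateCommandQueue(ctx, device, 0, NULL);

    /* 3. Load the image precompiled offline by the IOC (file name assumed). */
    FILE *f = fopen("kernels.aocx", "rb");
    fseek(f, 0, SEEK_END);
    size_t len = (size_t)ftell(f);
    fseek(f, 0, SEEK_SET);
    unsigned char *bin = malloc(len);
    fread(bin, 1, len, f);
    fclose(f);
    const unsigned char *bins[] = { bin };
    cl_program prog = clCreateProgramWithBinary(ctx, 1, &device, &len,
                                                bins, NULL, NULL);
    clBuildProgram(prog, 1, &device, "", NULL, NULL);

    /* 4. Create the kernel and buffers, and move the input data over PCIe. */
    cl_kernel k = clCreateKernel(prog, "sum_reduce", NULL);
    float in[1024] = {0}, out = 0.0f;
    cl_uint n = 1024;
    cl_mem in_buf  = clCreateBuffer(ctx, CL_MEM_READ_ONLY, sizeof(in), NULL, NULL);
    cl_mem out_buf = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, sizeof(out), NULL, NULL);
    clEnqueueWriteBuffer(q, in_buf, CL_TRUE, 0, sizeof(in), in, 0, NULL, NULL);
    clSetKernelArg(k, 0, sizeof(cl_mem), &in_buf);
    clSetKernelArg(k, 1, sizeof(cl_mem), &out_buf);
    clSetKernelArg(k, 2, sizeof(cl_uint), &n);

    /* 5. Launch the single work-item kernel and read the result back. */
    clEnqueueTask(q, k, 0, NULL, NULL);
    clEnqueueReadBuffer(q, out_buf, CL_TRUE, 0, sizeof(out), &out, 0, NULL, NULL);
    clFinish(q);
    printf("sum = %f\n", out);
    return 0;
}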
Although these synchronization techniques are needed to control access to shared resources, they introduce a significant time overhead because of the waiting time each thread incurs before accessing the critical resource [25]. The present research recommends a technique that can have lower overhead than the alternatives when implemented on the FPGA platform. Several benchmarks are developed to analyze the memory access time of various implementations, thereby achieving the study's purpose. These benchmarks are classified as atomic or non-atomic in the first layer and as high- or low-contention in the second layer. For read or write memory operations, we define atomic access as access by only one thread to a distinct memory location, such that atomic accesses to the same memory location must be serialized. High contention means that all threads access the same memory location. In low-contention instances, we generate multiple memory locations separated by at least 64 bytes, so there is minimal chance that two or more threads will access the same memory location [21]. Lastly, in this design we assume that each workgroup has 128 work-items. Only one thread (the master thread) is given access to memory; for simplicity, thread zero is the master thread.

All benchmarks are compiled using GCC v4.4 and the Intel FPGA compiler v13.1 under the Linux CentOS operating system. The target FPGA is the DE5 Stratix V device (5SGXEA7N2F45C2). This board contains enough resources, including 234K ALMs, more than 250 DSP blocks, and 2.6K RAM blocks, to synthesize user code for various computation-heavy applications. The host CPU and the target board are connected via PCIe, which enables extremely rapid data transfers between the processing units. The benchmarks created within the abstracted OpenCL framework are compatible with a wide range of FPGA types; for Xilinx FPGAs, the SDAccel framework, analogous to the Intel OpenCL tool, can be employed to execute the benchmarks on the designated Xilinx board.

Available resources vs. throughput trade-offs Each FPGA contains a countable number of specific resources, such as ALMs, memory blocks, and DSPs. The FPGA is usually connected to the host machine through the PCIe interface [6]. Each kernel is translated into a hardware circuit using a fixed amount of resources. Typically, all kernels are combined into a single .cl (device code) file. While it takes only microseconds to milliseconds to run a kernel on the FPGA (depending on the synthesized design), the overhead of switching kernels at runtime is extremely large: averaged over 100 runs, configuring the device at runtime took approximately 1.612 seconds. This indicates that the configuration time is significant in most cases.
Another factor to consider is the additional resource consumption associated with using the high-level abstract OpenCL programming tool. Experiments show that approximately 16% of the ALMs, 11% of the memory blocks, 3% of the total memory bits, and 53,893 registers are consumed to implement a blank (empty) kernel. The extra resource overhead shown in Table 1 can be amortized by combining multiple kernels into a single file. Table 1 summarizes multiple vector-addition kernels, where the kernel is duplicated up to five times in a single file: column 2 shows the resource usage of the blank kernel; column 3, a single vector-addition kernel; and columns 4-7, two, three, four, and five vector-addition kernels. The experiment demonstrates the overhead associated with using the high-level abstracted OpenCL tool.

Loop unrolling can enhance performance by running several loop iterations per clock cycle. However, the duplicated function units required to implement loop unrolling consume more resources, so the unrolling factor depends mainly on the resources available. Because of hardware limitations, the loops cannot be fully unrolled in this work; the loops in all benchmarks are therefore unrolled 256 times. The same holds for the different mutex implementations, all of which are unrolled 10 times.

Proposed method and the developed benchmarks A set of eight benchmarks is created and compiled with the IOC to test the performance of FPGA memory systems. The benchmarks are classified as atomic or non-atomic, contentious or non-contentious, and read or write. For an atomic memory access operation, only one thread can access the desired memory location at a time, and no other thread can access the same location concurrently. In contentious cases, all threads access the same memory location, whereas in non-contentious cases different threads access different memory locations. Threads are divided into workgroups, each containing 128 threads, but only one thread in each workgroup (the first thread, the master) accesses memory. Each master thread performs 1024 memory access operations, which can be reads or writes. An atomic add operation is used to implement the atomic read, and an atomic exchange is used to implement the write operation. All benchmark loops are unrolled 256 times; this unrolling factor is based on the resources available on the target FPGA. These benchmarks serve as representative indicators and encompass various memory access behaviors; a sketch of one of them is given below. The primary concern addressed here is memory access time, which is particularly crucial when targeting parallel, high-speed computation platforms like GPUs, multicores, and FPGAs. Numerous researchers have discussed memory access time and synchronization issues in the context of GPU and multi-core platforms; this study specifically investigates the FPGA memory system and the impact of memory access time on overall FPGA performance.
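A minimal sketch of what one such kernel could look like is given here (the atomic-write, low-contention variant). The code is hypothetical, since the paper does not list its benchmark sources; the built-in atomic_xchg is used for the "atomic exchange" operation, and the padding constant is chosen to give the 64-byte separation described above.

// Hypothetical OpenCL C sketch of one benchmark kernel (atomic write,
// low contention). Each 128-item work-group has a single master thread
// that performs 1024 accesses to its own 64-byte-padded slot.
#define OPS_PER_MASTER 1024
#define PAD_INTS       16   /* 16 x 4 bytes = 64-byte separation */

__kernel __attribute__((reqd_work_group_size(128, 1, 1)))
void atomic_write_low_contention(__global volatile int *restrict slots)
{
    if (get_local_id(0) != 0)
        return;                     /* only the master thread touches memory */

    const int slot = get_group_id(0) * PAD_INTS;

    #pragma unroll 256              /* unroll factor used throughout the study */
    for (int i = 0; i < OPS_PER_MASTER; i++)
        (void)atomic_xchg(&slots[slot], i);   /* "atomic exchange" write */
}

The high-contention variant would simply drop the per-group offset so that every master thread hammers slots[0], and the non-atomic variants would replace the atomic_xchg with a plain volatile store or load.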
Table 2 shows that atomic operations take more time to execute; reducing the number of atomic operations will therefore enhance performance significantly. The effects of contention are not pronounced. The computing unit is saturated by running eight workgroups, each comprising 128 threads, with only the master thread accessing the desired memory location. The study considers various workloads and data sizes: the performance of these algorithms is evaluated through a set of experiments involving different numbers of thread blocks and varying numbers of memory operations.

The average execution times of the various memory read/write operations are shown in Fig 5, normalized to the execution time of a contentious volatile memory operation. Fig 5 also shows the effect of atomic operations on memory access, which may increase the memory access time by more than a factor of eight. However, the task-parallel model is more commonly used to construct designs on the FPGA platform: the Intel FPGA compiler can create an effective pipeline design in which data is shared among multiple loop iterations, reducing the overall dependencies and the high cost of using several synchronization mechanisms.

MUTEX implementation and results discussion After studying the memory access benchmarks, several possible implementations of a mutex are developed and tested on the Intel FPGA architecture. All proposed algorithms perform atomic memory access, and only the master thread has access to memory. The suggested implementations [21] are described below.

A. Spinning: the target thread waits until the status of the lock's memory location changes. Two operations are considered: • Lock function: The memory location is continually accessed using atomic exchange, which always returns the old value of the lock. If the returned value is 0, the thread can enter the critical section; otherwise, it keeps performing the atomic exchange until it is granted access. • Unlock function: The critical section is released by writing a lock value of 0 with an atomic exchange. This method is easy to implement; however, threads do not necessarily enter the critical section in the order in which they arrive (not FCFS).

B. Backoff: the target thread keeps doing non-useful work before gaining access to the resource. Two operations are carried out: • Lock function: The thread tries to gain access to the critical section if it is free. Otherwise, the thread sleeps for a certain time based on the thread group ID. This time increases after each trial until it reaches a maximum value determined at compilation time; if the incremented value exceeds the maximum, it is reset to the minimum. This process repeats until the thread enters the critical section. • Unlock function: The unlock function assigns a lock value of 0 (a non-atomic operation).
C. Fetch and add using backoff: fetch-and-add is a well-known instruction supported by many processors for building an effective mutex implementation. Backoff is employed here to make a thread wait when the resource is unavailable. Two operations are implemented: • Lock function: Each thread that wants to enter the critical section takes a ticket (the first variable), a number based on the thread's arrival order. The thread may enter the critical section only when the value of its ticket equals the value of the turn (the second variable). If the ticket value does not equal the turn value, the thread uses the backoff algorithm to sleep for a period of time.

D. Fetch and add using sleeping: the same as fetch and add using backoff, but with the sleeping technique used instead of backoff to implement the thread waiting. • Lock function: This function is the same as in the fetch-and-add-using-backoff algorithm, except that if the ticket value does not equal the turn value, the thread continuously polls the variables' memory locations to check whether the equality condition is satisfied.

Several experiments with varying numbers of thread blocks are carried out to compare the performance of these algorithms, measured as the number of memory operations completed per second. Table 3 shows the experimental results, which demonstrate that the highest throughput is achieved by the spinning implementation of the mutex, as also shown in Fig 6; values represent millions of memory operations per second on the Intel DE5 FPGA device. Since the target platform is a general Intel FPGA device, the preferred implementation is the one that uses the fewest hardware resources. Table 4 shows some common resources used by each algorithm; the synthesized architecture of the spinning algorithm consumes fewer resources than the others. For all algorithms, each loop iteration contains 100 memory operations, and each operation has lock and unlock functions. (A sketch of the spinning and ticket-lock variants appears after the figure notes below.)

Applying synchronization methods such as the adaptive distributed consensus control of one-sided Lipschitz nonlinear multiagent systems [26] and the delay-range-dependent chaos synchronization approach, which considers varying time lags and delayed nonlinear coupling [27], could allow memory synchronization to be investigated and analyzed across other computation platforms. This would require regenerating and developing new benchmarks; these approaches may be embraced in future studies, allowing comparison with the results presented here.

Conclusion Several memory-access benchmarks are developed to study the effect of common synchronization techniques on the overall performance of a synthesized design constructed on the Intel FPGA platform. The benchmarks are developed using the abstracted high-level OpenCL programming tool. The results demonstrate that using atomic operations in the synthesized design leads to significant reductions in performance. Therefore the task-parallel model, which improves the efficiency of the design by generating an effective pipeline architecture, is the favorable choice when extra atomic operations would otherwise be needed. The present study also investigates several implementations of the widely used mutex synchronization mechanism and determines which implementation should be adopted to maximize the number of memory operations performed per second.
Fig 2. Examples of optimization techniques that can be applied to the proposed design using the FPGA software development tool.
Fig 3. Illustration of the pipeline architecture created by the IOC for a given kernel code written in OpenCL.
Fig 4. Summary of the developed benchmarks.
Table 3. The number of operations completed per second (×10^6). Spinlock is the preferred implementation, and fetch-and-add using backoff has the lowest throughput.
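For concreteness, here is a hypothetical OpenCL C sketch of two of the mutex variants above: spinning, and the fetch-and-add ticket lock with polling. The paper's own implementations follow [21] and are not reproduced, so the names and details here are assumptions.

/* Hypothetical OpenCL C sketch (not the paper's code).
 * Portable OpenCL makes no forward-progress guarantee across work-groups;
 * these helpers rely on the FPGA scheduler behaving as the study observes. */

/* A. Spinning: atomic_xchg returns the old lock value; 0 means we won. */
void lock_spin(__global volatile int *lock)
{
    while (atomic_xchg(lock, 1) != 0)
        ;                            /* keep spinning until the lock was free */
}
void unlock_spin(__global volatile int *lock)
{
    atomic_xchg(lock, 0);            /* release the critical section */
}

/* C/D. Fetch-and-add ticket lock: arrival order fixes the service order
 * (FCFS), unlike plain spinning. The polling variant shown here corresponds
 * to variant D; variant C would sleep with backoff instead of polling. */
void lock_ticket(__global volatile int *ticket, __global volatile int *turn)
{
    int my_ticket = atomic_add(ticket, 1);   /* take the next ticket number */
    while (*turn != my_ticket)
        ;                                    /* poll until it is our turn */
}
void unlock_ticket(__global volatile int *turn)
{
    atomic_add(turn, 1);                     /* pass the lock to the next ticket */
}

The design trade-off mirrors the measurements above: the spinning lock needs one variable and one atomic per attempt, which is why it synthesizes to the smallest circuit, while the ticket lock buys fairness at the cost of a second shared variable; on the FPGA, the task-parallel pipeline model discussed earlier often removes the need for such locks altogether.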
Quantifying the effect of thermal heat radiation emitted by the walls of a climatic chamber on temperature measurements. This study quantifies the effect of the thermal radiation of the climatic chamber walls on air temperature measurements in contact thermometry. Air temperature measurements are affected by surface interactions with the environment, such as those of the thermometer and walls (surface condition, emissivity, and air velocity). The walls of the enclosure are generally made of stainless steel, a potentially radiating material. To characterize the effect of the walls, we varied environmental conditions such as the emissivity of the chamber walls and the sensors, the surface area of the sensors, the temperature, and the illumination. These different configurations allow us to deduce their impact on the temperature measurements. To quantify this effect we tested different configurations designed to isolate the radiation effect. Two surface states are tested: a low-emissivity metal surface and a surface painted with high-emissivity matte black paint. This study highlights the effect of the walls on air temperature measurements at the center of the climatic chamber. The experimental results were also subjected to a theoretical verification using the equation of the standard ISO 7726 [1]. The effect of radiation from the walls of the climate chamber on temperature measurements becomes significant from 100 °C: the quantified effect is 0.4 °C at 100 °C and 0.8 °C at 150 °C.

General context Temperature is an important quantity for climate chamber testing, which is widespread in industry (for example, FD X 15-140). It is identified as an influence parameter on the quality of climate testing, so it is essential to know its value. Temperature depends on three heat transfer mechanisms: conduction, convection, and radiation. The last of these causes many quantification difficulties. Numerous studies have been conducted to assess or reduce the effect of radiation on temperature measurements, but no quantification has been proposed.

Context at CETIAT CETIAT's temperature measurement laboratory is accredited by the French Accreditation Committee (COFRAC) and performs temperature calibrations from −90 °C to 1050 °C [2]. The calibration for this study was carried out at CETIAT.

DESCRIPTION OF THE STUDY As part of quantifying the effect of the climatic chamber walls' thermal radiation on temperature measurements, we address five main points: (1) choice of sensors, (2) test volume, (3) calibration of the sensors, (4) different emissivity conditions of the climatic chamber and the sensors' environment, and (5) sensor locations. Two campaigns were carried out at CETIAT: one in which the probes were placed inside tubes, and another in which the sensors' sheaths were either painted with high-emissivity paint or left bare.

Choice of sensors RTDs (100-ohm platinum resistance sensors) are used for the measurements at the center of the volume. T-type thermocouples measure the temperature of the walls, an infrared camera checks the homogeneity of the surfaces (Fig. 9), and a hot-wire anemometer measures the air velocity. Data acquisition is carried out with an HP 34972 data acquisition unit.

Test volume: climatic chamber The temperature measurements were performed in climatic chambers. A climatic chamber generates controlled thermal conditions: temperature from −70 °C to 160 °C and relative humidity from 10% to 95% RH.
Fig. 2. Climatic chamber.

To quantify the effect of the radiation of the walls, temperature measurements are made at the center of the enclosure. Other measurements are also made at the walls for the calculations. The choice of sensors requires special attention in order to match the required measurement uncertainty. To measure the temperature at the center of the climatic chamber, platinum resistance thermometers were chosen, as they have a low uncertainty and provide a reliable result. Thermocouples are used to measure the wall temperatures; these sensors are not bulky and allow direct measurement on the wall.

Calibration of the sensors The temperature sensors were previously calibrated in an overflow thermostatic bath by comparison with a standard platinum resistance thermometer. Calibration is performed at the four desired temperature levels: 25 °C, 50 °C, 100 °C, and 150 °C. The expanded uncertainty of the calibration of the measurement chain is 0.06 °C.

Variation of different physical quantities Emissivity characterizes a surface's ability to emit (and, by Kirchhoff's law, absorb) radiated energy. To quantify the effect of wall radiation on temperature measurements, we use removable walls: one side is rock wool covered with high-emissivity (0.95) matte black paint, and the other side is a reflective aluminium foil (0.04) [3]. Figure 4 shows an example of the surface condition of the removable walls. By reversing these faces, we can assess their impact on temperature measurements under the same heat flow conditions.

Fig. 4. Removable walls of different emissivity (aluminium and rock wool painted black).

The emissivity of the sensors used to measure the air temperature at the center of the chamber was also modified: some sensor sheaths are stainless steel, others are painted matte black. Figures 5 and 6 show two different sizes of sensors; we thus have sheathed sensors of different emissivity but the same geometry. The radiation emitted or absorbed by a sensor also depends on its exchange area, so different sensor diameters, 2 mm and 6 mm, are used to assess the impact on the temperature measurement. Initial tests were carried out with copper tubes 22 mm in diameter, painted with high-emissivity paint inside and/or outside, with the sensors positioned at the center of the tubes. These results are also presented, but a convective effect also occurs there. Figure 7 shows the installation in the climatic chamber.

RESULTS AND DISCUSSION After carrying out the measurements at the four temperatures (25 °C, 50 °C, 100 °C, and 150 °C) with the two wall emissivities (matte black and aluminium), we proceed to the analysis. Several results allow us to observe phenomena related to the radiation of the walls; these results are compared with those obtained during the previous study with the tubes.

Effect of emissivity The temperature measurements obtained with the different wall emissivities show variations. In figure 11, we observe the temperature differences between the aluminium and matte black walls, and between the painted and unpainted probes. At 25 °C and 50 °C, the effect of the walls is about 0.01 °C, which is negligible compared with the quality of the sensors. At 100 °C and 150 °C, the radiation begins to have an impact on the measurements; Table 1 gives the values obtained. For the stainless steel probes, the temperature differences between the two wall states are 0.4 °C and 0.75 °C at 100 °C and 150 °C, respectively.
These results confirm the effect of wall radiation on temperature measurements obtained in the first study.

Table 1. Differences between black-painted and stainless steel sheaths (2019); rows: type-T thermocouples and the 6 mm and 2 mm RTDs (black-painted and stainless steel); columns: aluminium vs. matte black walls.

The differences are similar in the two studies carried out. Although the capsules surrounding the sensor have a larger exchange surface, they reduce the effect of convection on the temperature measurement, which may explain the difference between the results.

Table 2. Differences between black-painted and bare copper tubes (2018).

The results obtained in 2018 with the tubes show much larger discrepancies (see the graph below). Surrounding the sensors with tubes is not appropriate, since the measured temperature then reflects both convective and radiative heat transfer. In the present study, the sensors' sheaths were painted directly, giving different emissivities on identical surfaces.

Effect of sensor diameter The temperature differences due to the different sensor diameters are weak, about 0.05 °C: the ratio of the exchange surface between the walls of the enclosure and the sensors is too low. In conclusion, there is no significant effect of diameter on the results compared with the uncertainty. We noticed that the temperatures measured by the high-emissivity sensors are lower, whereas the opposite might have been expected.

Calculations by form factor The objective is to calculate the effect of radiation using the form-factor method. This simplified calculation determines the average radiation temperature from the temperatures of the surrounding surfaces and the shape factors. The form factor measures the fraction of the flux radiated by an isothermal surface that is received by another surface in a non-participating medium. To calculate the average radiation temperature, we used the following equation [1]:

T̄r^4 = Σ_j T_j^4 F_hj,

where T̄r is the average radiation temperature, in kelvin; T_j is the temperature of surface j, in kelvin; and F_hj is the form factor between a sheath and surface j. We performed two calculations, with the matte and the aluminium wall temperatures. The table below summarizes the temperatures of the matte and aluminium walls of the enclosure, measured at 100 °C. With the matte and aluminium wall temperatures, we obtain the two respective results 1.5 °C and 1.6 °C at 100 °C. The difference between the two values, 0.1 °C, can be attributed to the effect of the radiation of the walls. The experiments give higher values; these results will need to be confirmed by further tests and more comprehensive calculations.

Conclusion This study was carried out with platinum resistance thermometers placed at the center of a climate chamber. Measurements with different configurations (paint, diameter) were made. The results confirm an effect of radiation on these temperature measurements, which becomes significant around 100 °C and is broadly confirmed by the calculations: the effect is 0.4 °C and 0.75 °C at 100 °C and 150 °C, respectively. These initial results have yet to be confirmed by further tests.
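As a closing illustration, the form-factor calculation used above can be sketched numerically as follows. This is a minimal sketch under the assumption that the reconstructed relation above is the one applied; the temperatures and form factors are illustrative values, not the paper's measurements.

/* Hypothetical C sketch of the mean-radiation-temperature calculation:
 * T̄r = (sum_j F_j * T_j^4)^(1/4), with the form factors summing to 1. */
#include <math.h>
#include <stdio.h>

double mean_radiant_temperature(const double T[], const double F[], int n)
{
    double sum = 0.0;                  /* accumulates sum_j F_j * T_j^4 */
    for (int j = 0; j < n; j++)
        sum += F[j] * pow(T[j], 4.0);
    return pow(sum, 0.25);             /* fourth root, result in kelvin */
}

int main(void)
{
    /* six chamber walls near 100 degC = 373.15 K, equal form factors */
    double T[6] = {372.9, 373.3, 373.0, 373.4, 372.8, 373.2};
    double F[6] = {1/6., 1/6., 1/6., 1/6., 1/6., 1/6.};
    printf("Tr = %.2f K\n", mean_radiant_temperature(T, F, 6));
    return 0;
}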
Research on Dynamic Path Planning of Mobile Robot Based on Improved DDPG Algorithm. Aiming at the low success rate and slow learning speed of the DDPG algorithm in path planning for a mobile robot in a dynamic environment, an improved DDPG algorithm is designed. In this article, the RAdam algorithm replaces the neural network optimizer in DDPG, combined with a curiosity module to improve the success rate and convergence speed. On top of the improved algorithm, prioritized experience replay is added and transfer learning is introduced to improve the training effect. Using the ROS robot operating system and the Gazebo simulation software, a dynamic simulation environment is established, and the improved DDPG algorithm is compared with the original DDPG algorithm. For the dynamic path planning task of the mobile robot, the simulation results show that the convergence speed of the improved DDPG algorithm is increased by 21% and the success rate rises to 90% compared with the original DDPG algorithm. The method works well for dynamic path planning of mobile robots with a continuous action space.

Introduction Path planning is a very important part of the autonomous navigation of robots. The robot path planning problem can be described as finding an optimal path from the current point to a specified target point in the robot's working environment, according to one or more optimization objectives, under the condition that the robot's position is known [1,2]. At present, the commonly used algorithms include the artificial potential field method [3], genetic algorithms [4], fuzzy logic [5], and reinforcement learning [6]. In recent years, many scholars have proposed path planning methods for dynamic environments. In 2018, Qian [7] proposed an improved artificial potential field method based on connectivity analysis for the path planning of dynamic targets in an LBS system. In 2019, to solve the long-distance path planning problem of outdoor robots, Huang [8] proposed an improved D* algorithm combined with Gaode maps based on a vector model. In 2020, Nair and Supriya [9] applied the LSTM neural network to path planning in a dynamic environment. Reinforcement learning (RL) is a learning paradigm that does not require the agent to know the environment in advance: the mobile robot takes actions while perceiving the current environment and, according to the current state and the actions taken, migrates from the current state to the next state. The Q-learning algorithm [10] is a classical reinforcement learning algorithm that is simple and convergent and has been widely used. However, when the environment is complex, as the dimension of the state space increases, reinforcement learning is prone to fall into "dimension explosion." Deep learning (DL) handles high-dimensional information well, and deep reinforcement learning (DRL), which combines DL with reinforcement learning [11,12], can not only process high-dimensional environmental information but also carry out planning tasks by learning an end-to-end model. The DQN algorithm [13] arose in this way; it usually addresses problems with discrete, low-dimensional action spaces. The Deep Deterministic Policy Gradient (DDPG) algorithm, proposed by the DeepMind team in 2016, uses the actor-critic framework and borrows the experience replay and target network ideas of DQN to handle continuous action spaces [14]. However, when
the DDPG algorithm is applied to path planning in a dynamic environment, it has shortcomings such as a low success rate and slow convergence, and most related research stays at the theoretical level, lacking solutions to practical problems.

In this article, a new DDPG algorithm is proposed in which the RAdam algorithm replaces the neural network optimizer of the original algorithm, combined with a curiosity module to improve the success rate and convergence speed, and with prioritized experience replay and transfer learning introduced. The raw data is obtained through the lidar carried by the mobile robot, from which the dynamic obstacle information is extracted, and the improved algorithm is applied to the path planning of the mobile robot in the dynamic environment, so that it can move safely from the starting point to the end point in a short time along the shortest path, verifying the effectiveness of the improved algorithm.

The article is organized as follows: the first section is the introduction; the second section introduces the DDPG algorithm principle and network parameter settings; the third section presents the path planning design of the improved DDPG algorithm; the fourth section shows the simulation experiments and analyzes the results; and the last section gives the conclusions.

DDPG Algorithm Principle and Network Parameter Setting 1.1.1. Principle of DDPG Algorithm. The DDPG used in this article is a policy learning method that outputs continuous actions. Built on the DPG algorithm, it draws on the advantages of the actor-critic single-step policy-gradient update and of DQN's experience replay and target network techniques, improving the convergence of actor-critic. The DDPG algorithm consists of a policy network and a Q network. DDPG uses a deterministic policy to select the action a_t = μ(s_t|θ_μ), so the output is not a probability over behaviors but a specific behavior, where θ_μ is the parameter of the policy network, a_t is the action, and s_t is the state. The DDPG algorithm framework is shown in Figure 1.

The actor uses the policy gradient to learn the strategy and select the robot's action in the current environment, while the critic uses policy evaluation to estimate the value function and generate signals that evaluate the actor's actions. During path planning, the environmental data obtained by the robot's sensors is fed into the actor network, which outputs the action the robot should take.
The critic network takes the environmental state of the robot and the planned action as input and outputs the corresponding Q value for evaluation. In the DDPG algorithm, both the actor and the critic are represented by deep neural networks (DNNs): the actor and critic networks approximate the μ and Q functions, respectively. When the algorithm performs an iterative update, sample data first accumulates in the experience pool until the minimum batch size is reached; the critic network is then updated from the sampled data, the parameter θ_Q of the Q network is updated through the loss function, and the gradient of the objective function with respect to the action is obtained [15]. Then θ_μ is updated with the Adam optimizer. The robot in this article obtains the distances to surrounding obstacles through lidar. The detection range of the lidar is (0.12, 3.5) m, and its detection angle range [16] is (−90°, 90°), with 0° directly in front of the robot, 90° to the left, and 90° to the right. The lidar data has 20 dimensions, and the angular separation between adjacent beams is 9°. The criterion for judging whether the robot hits an obstacle while moving is that the distance to the obstacle falls below 0.2 m. In the simulation, the 20-dimensional lidar distance information is obtained.

DDPG Network Parameter Setting According to the distance between the robot and the obstacles, the robot's state is divided into a navigation state N and a collision state C as follows:

state(t) = C if d_i(t) ≤ 0.2 m for some i; N if d_i(t) > 0.2 m for all i,

where d_i(t) is the i-th dimension of the robot's lidar distance data at time t. When the distance to an obstacle satisfies d_i(t) ≤ 0.2 m, the robot is in the collision state C; when d_i(t) > 0.2 m, the robot is in the normal navigation state N [17,18].

Action Space Setting. The final output of DDPG's decision network is a continuous angular velocity value in a fixed interval. A continuous angular velocity output is more consistent with the kinematics of the robot, so the robot's trajectory is smoother and the output action more continuous. In the simulation the angular velocity must be bounded, so the maximum angular velocity is set to 0.5 rad/s. Hence the output angular velocity interval of DDPG is (−0.5, 0.5) rad/s, the linear velocity is 0.25 m/s, the forward speed (linear velocity v, angular velocity ω) is (0.25, 0), the left-turn speed is (0.25, −0.5), and the right-turn speed is (0.25, 0.5).
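For reference, the critic and actor updates sketched in this section can be written compactly in the standard form of [14] (our restatement in LaTeX, using the notation above; Q' and μ' denote the target networks and N the minibatch size):

\begin{aligned}
y_t &= r_t + \gamma\, Q'\big(s_{t+1}, \mu'(s_{t+1}|\theta_{\mu'}) \,\big|\, \theta_{Q'}\big), \\
L(\theta_Q) &= \frac{1}{N}\sum_{t}\big(y_t - Q(s_t, a_t|\theta_Q)\big)^2, \\
\nabla_{\theta_\mu} J &\approx \frac{1}{N}\sum_{t}
  \nabla_a Q(s, a|\theta_Q)\big|_{s=s_t,\,a=\mu(s_t)}\,
  \nabla_{\theta_\mu}\mu(s|\theta_\mu)\big|_{s=s_t}.
\end{aligned}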
Reward Function Settings. The reward function returns −200 when the robot collides with an obstacle, that is, when the distance d_i−o(t) between the robot and the obstacle falls below 0.2 in the experimental simulation, and +100 when the robot reaches the target point, where d_i−t(t) is the distance between the robot and the target point. In all other cases, the return value is the difference between the distance to the target at the previous moment and the distance at the current moment, scaled by a factor of 300. This design makes the robot move continuously toward the target point, so that every action taken by the robot receives timely feedback, ensuring the continuity of the reward function and speeding up the convergence of the algorithm.

Among neural network optimizers, SGD converges well but takes a long time, while Adam converges quickly but easily falls into local optima. RAdam uses a warm-up strategy to address Adam's tendency to converge to local optima, selecting the relatively stable SGD with momentum for training in the early stage to keep the variance down; RAdam is therefore superior to the other neural network optimizers. In addition, the RAdam algorithm [19] is a recently proposed algorithm with fast convergence and high precision that can effectively address the variance problem of adaptive learning-rate methods. The RAdam algorithm is therefore introduced into the DDPG algorithm to counter the low success rate and slow convergence of mobile robot path planning in a dynamic environment caused by the neural network variance problem [20]. In the RAdam update, θ_t denotes the parameters being trained, t the training step, α_t the step size, r_t the rectification term, v̂_t the bias-corrected moving second moment, m̂_t the bias-corrected moving average, {β_1, β_2} the decay rates and {β_1^t, β_2^t} their values at step t, m_t the first moment (momentum), v_t the second moment (adaptive learning rate), g_t the gradient, ρ_∞ the maximum length of the simple moving average, ρ_t its value at step t, J(θ) the objective, and ∇_θ the gradient operator; the update equations are given below.
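Restated in LaTeX from the published RAdam algorithm [19] (our reconstruction in the notation above, since the article's own equation block is not reproduced here):

\begin{aligned}
m_t &= \beta_1 m_{t-1} + (1-\beta_1)\, g_t, \qquad
v_t = \beta_2 v_{t-1} + (1-\beta_2)\, g_t^2, \qquad
\hat m_t = \frac{m_t}{1-\beta_1^t}, \\
\rho_\infty &= \frac{2}{1-\beta_2} - 1, \qquad
\rho_t = \rho_\infty - \frac{2\, t\, \beta_2^t}{1-\beta_2^t}, \\
\theta_t &= \theta_{t-1} - \alpha_t\, r_t\, \frac{\hat m_t}{\hat v_t},
\quad \hat v_t = \sqrt{\frac{v_t}{1-\beta_2^t}}, \quad
r_t = \sqrt{\frac{(\rho_t-4)(\rho_t-2)\,\rho_\infty}{(\rho_\infty-4)(\rho_\infty-2)\,\rho_t}}
\quad \text{if } \rho_t > 4, \\
\theta_t &= \theta_{t-1} - \alpha_t\, \hat m_t
\quad \text{otherwise (an SGD-with-momentum-like warm-up step).}
\end{aligned}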
Prioritized Experience Replay. In mobile robot path planning in a dynamic environment, the uncertainty of the environment produces a large number of invalid collision experiences in the early stage of training. The original DDPG algorithm trains on these invalid experiences, which leads to a low success rate of path planning after training and wastes a great deal of time. To address this, prioritized experience replay is designed and added. When prioritized experience replay draws experiences, priority is given to the most valuable ones, but not exclusively, since that would cause overfitting: the higher the value, the greater the probability of extraction, while even the lowest-valued experiences retain some probability of being drawn.

Prioritized experience replay uses the size of the TD (temporal difference) error to measure which experiences contribute most to the learning process. In the DDPG algorithm, the core update is Q_w(s_t, a_t) ← Q_w(s_t, a_t) + α·δ_t, where the TD error is

δ_t = r_{t+1} + γ max_{a_{t+1}} Q_w(s_{t+1}, a_{t+1}) − Q_w(s_t, a_t),

where max_{a_{t+1}} Q_w(s_{t+1}, a_{t+1}) means that the action a_{t+1} is selected from the action space when the mobile robot is in state s_{t+1} so that Q_w(s_{t+1}, a_{t+1}) is the maximum of the Q values over all actions, and t is the training step. The discount factor γ takes a value in (0, 1), so that the mobile robot neither pays too much attention to future reward values nor becomes short-sighted by attending only to the immediate return. r_{t+1} is the return obtained by the mobile robot when executing action a_t and transitioning from state s_t to s_{t+1}. The goal of prioritized experience replay is to make the TD error as small as possible: if the TD error is large, the current Q function is still far from the target Q function and should be updated more. The TD error is therefore used to measure the value of an experience, and a binary tree is used to draw experiences efficiently according to their respective priorities; a standard form of the sampling rule is recalled below.
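The article does not spell out its exact sampling rule; the standard proportional prioritization of prioritized experience replay (Schaul et al.), which matches the behavior described above, reads

p_i = |\delta_i| + \epsilon, \qquad
P(i) = \frac{p_i^{\alpha}}{\sum_k p_k^{\alpha}}, \qquad
w_i = \frac{\big(N\,P(i)\big)^{-\beta}}{\max_j w_j},

where δ_i is the TD error of transition i, ε is a small positive constant that keeps every transition sampleable, α controls how strongly priority skews the sampling, and w_i are the importance-sampling weights that correct the bias induced by non-uniform sampling (the symbols α and β here are local to this rule).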
Curiosity Algorithm. The core of the interaction between a deep reinforcement learning algorithm and its environment is the design of the reward mechanism: a reasonable reward mechanism can speed up the agent's learning and achieve good results. However, in mobile robot path planning in a dynamic environment, as the working environment becomes more and more complex, external reward training alone cannot quickly produce good results. The curiosity algorithm [21] is therefore introduced to provide internal rewards, derived from the agent's own prediction errors through the internal curiosity module (ICM), so that the mobile robot trains under the combined action of internal and external rewards and achieves a good path planning effect. The total reward combined with the DDPG algorithm is r_t = r_t^i + r_t^ε, where r_t is the total reward, r_t^i the internal reward of the curiosity module, and r_t^ε the external reward of the DDPG algorithm. In a complete training step, the original state, the next state, and the action are passed through the internal curiosity module, as shown in Figure 2.

Specifically, the curiosity algorithm uses two submodules: the first encodes s_t into φ(s_t), and the second uses two consecutive encoded states, φ(s_t) and φ(s_{t+1}), to predict the action a_t. That is, the agent's action is predicted as â_t = g(s_t, s_{t+1}; θ_I), where â_t is the predicted estimate of the action, s_t and s_{t+1} are the original and next states of the agent, θ_I is a neural network parameter, and the function g is the inverse dynamics model. The state prediction of the forward model is compared with the encoding of the actual next state, and the resulting error yields the internal reward. The forward model is

φ̂(s_{t+1}) = f(φ(s_t), a_t; θ_F),

where φ̂(s_{t+1}) is the predicted encoding of the next state, φ(s_t) is the feature vector encoding the original state s_t, a_t is the action, θ_F is a neural network parameter, and the learned function f is called the forward dynamics model. The parameter θ_F is optimized by minimizing the loss function

L_F = (1/2) ||φ̂(s_{t+1}) − φ(s_{t+1})||²,

and the intrinsic reward is

r_t^i = (η/2) ||φ̂(s_{t+1}) − φ(s_{t+1})||²,

where η > 0 is a scale factor. The encodings of the original and next states are also used by the inverse dynamics model to predict the action. The overall optimization objective of the curiosity algorithm is

min_{θ_P, θ_I, θ_F} [ −λ E_{π(s_t; θ_P)}[Σ_t r_t] + (1 − β) L_I + β L_F ],

where β and λ are scalars; θ_I and θ_P are neural network parameters; the losses of the inverse model and the forward model are weighted by β, with 0 ≤ β ≤ 1; λ > 0 measures the importance of the policy-gradient loss against the reward signal; L_I is the loss function measuring the difference between the predicted and actual actions; r_t is the internal reward at time t; and π(s_t; θ_P) is the parameterized policy. In the simulation experiments, β is 0.2 and λ is 0.1.

Establishment of Simulation Experiment Environment. Hardware configuration of the simulation experiments: Intel i5-3320M CPU and 4 GB of memory. The operating system is Ubuntu 18.04, with the ROS Melodic robot operating system installed, and the simulation environment is built in Gazebo 9 under ROS. The generated experimental environment is shown in Figure 3. In the simulation environment of Figure 3(a), a square arena 8 meters on a side is established without obstacles; the starting position of the mobile robot is set to (−2, 2.5), and the target point, marked by a colored circle, is set to (2, −2). This environment is mainly used to train the mobile robot's ability to reach the target in a bounded space, for transfer learning. In the simulation environment of Figure 3(b), eight dynamic obstacles are added to the same arena: the four central (0.3 × 0.3 × 0.3) m³ obstacles rotate counterclockwise at 0.5 m/s, the upper and lower (1 × 1 × 1) m³ obstacles move horizontally at 0.3 m/s, and the two middle (1 × 1 × 1) m³ obstacles move vertically at 0.3 m/s. The starting point and target point are the same as in the first environment; this second environment is used to train the robot to plan its path in a dynamic environment.

Simulation Experiment and Result Analysis.
To verify the algorithm, the original DDPG and the improved DDPG path planning algorithms are trained for 1500 rounds in the same dynamic-obstacle simulation environment, and the total return of the mobile robot in each round of simulation training is recorded. Curves with the number of training rounds on the abscissa and the return per round on the ordinate are drawn as follows.

The results in Figure 4 show that when the DDPG algorithm is trained in the dynamic-obstacle simulation environment, the total return curve drifts up and down as the number of training rounds increases, indicating that training in the dynamic-obstacle environment ultimately does not converge. The total return fluctuates greatly from round 0 to 200, and the return is mostly negative: the robot is still "learning" to reach the target point. Observation of the training process shows that the mobile robot collides with dynamic obstacles as it gradually approaches the target point. In rounds 200 to 400 the total return can reach about 2400, and the robot reaches the target point in a few cases. After round 400 the return fluctuates between high and low, mostly stabilizing around 500, and observation shows that most episodes still end in collisions with dynamic obstacles.

Figure 5 shows the return curve of DDPG trained in the dynamic environment with transfer learning added. Because of the transfer learning, the robot moves toward the target point from the start, so the negative returns early in the curve are much smaller than in Figure 4, but the overall curve is uneven and does not converge.

Figure 6 shows the return curve of DDPG trained in the dynamic environment with only prioritized experience replay. Because prioritized replay eliminates a large amount of invalid data in the early training stage, the return curve in Figure 6 rises gradually from negative values and tends to stabilize, but convergence is slow and the success rate is low. Figure 7 shows the return curve of DDPG trained with both prioritized experience replay and transfer learning; the return curve improves over Figure 4, but convergence is still not achieved.

Figure 8 shows that when the DDPG algorithm with RAdam, together with prioritized experience replay and transfer learning, is trained in the dynamic-obstacle simulation environment, the return curve rises gradually and stabilizes as the number of training rounds increases. The results show that DDPG with RAdam approximately converges in the dynamic-obstacle environment. With the combination of internal and external rewards, the convergence speed and success rate are obviously improved compared with Figure 7, which shows that adding the RAdam optimizer has a better effect on mobile robot path planning in a dynamic environment.
Figure 9 shows that when the improved DDPG algorithm (the curiosity module introduced on top of the RAdam optimizer), with prioritized experience replay and transfer learning, is trained in the dynamic-obstacle simulation environment, the return curve rises steadily and converges.

During the experiments, Rviz subscribes to the odometry message Odom published by the mobile robot, visualizes the robot's pose at each moment as coordinate axes, and shows the planned paths to the target point before and after the improvement, as shown in Figures 11-13. The time and path results of the mobile robot reaching the target point in the dynamic environment before and after the improvement of the DDPG algorithm are shown in Table 2; to ensure the validity of the results, the test results are averaged over 10 runs.

According to the data in Table 2, the original DDPG algorithm takes 86 seconds and 280 steps to reach the target point in the dynamic environment of this article. Adding the RAdam neural network optimizer shortens the time to 76 s and reduces the step count to 250. Introducing the curiosity module on top of this brings the time to 60 s and the step count to 210. The experimental comparison proves that the improved DDPG algorithm improves both the time and the path length of mobile robot path planning in a dynamic environment, verifying the algorithm's effectiveness.

Improved DDPG Algorithm 2.4.1. RAdam Optimization Algorithm Design. In deep learning, most neural networks adopt an adaptive learning-rate optimization method, which suffers from an excessive-variance problem; reducing this variance improves training efficiency and recognition accuracy.

To verify the success rate of path planning of the trained models in the dynamic environment, the four training models were each tested 50 times in the training environment, and the same test was repeated three times to obtain an average, ensuring the validity of the data. The test results and the time taken to train each model are recorded in Table 1. The experimental results in Table 1 show that the success rate of the original DDPG algorithm reaches 50%, with a model training time of 14 hours. Adding only transfer learning reduces the training time but also reduces the success rate. Adding only prioritized experience replay raises the success rate to 70%, but the training time does not decrease. Adding prioritized experience replay and introducing transfer learning raises the success rate to 74% and shortens the model training time to 13 hours. Adding the RAdam algorithm raises the success rate to 86% and shortens the training time to 12 hours. Finally, with the curiosity module, the success rate rises to 90% and the training time falls to 11 hours. Compared with the original DDPG algorithm, the convergence speed is increased by 21% and the success rate rises to 90%.

Figure 6. Experimental results with prioritized experience replay added.
Figure 11. Dynamic obstacle path planning based on the original DDPG algorithm.
Figure 12. Dynamic obstacle path planning with the RAdam algorithm.
Figure 13. Improved algorithm for dynamic obstacle path planning.
Table 2. Path effect comparison.
5,740
2021-11-12T00:00:00.000
[ "Computer Science", "Engineering" ]
Double parton scattering in pA collisions at the LHC revisited We consider the production of W-boson plus dijet, W-boson plus b-jets and same-sign WW via double parton scattering in pA collisions at the LHC and evaluate the corresponding cross sections. The impact of a novel DPS contribution pertinent to pA collisions is quantified. Exploiting the experimental capability of performing measurements differential in the impact parameter in pA collisions, we discuss a method to single out such a contribution. The method allows the subtraction of the single parton scattering background and gives access, in a very clean way, to double parton distribution functions in the proton. We show that in the Wjj and Wbb channels the observation of DPS is possible with data already accumulated in pA runs and that the situation will improve in the next high-luminosity runs. Finally, for DPS observation in the ssWW channel one needs either a significant increase of integrated luminosity beyond that foreseen in the next runs or improved methods for reconstructing the W, along with its charge, in hadronic decay channels. I. INTRODUCTION The flux of incoming partons in hadron-induced reactions increases with the collision energy, so that multiple parton interactions (MPI) take place, both in pp and pA collisions. The study of MPI started in the eighties, in the Tevatron era [1,2], both experimentally and theoretically. Recently, significant progress was achieved in the study of MPI, in particular of double parton scattering (DPS). From the theoretical point of view, a new self-consistent pQCD-based formalism was developed both for pp [3-10] and pA DPS collisions [11] (see [12] for recent reviews). From the experimental point of view, among the many DPS measurements performed recently, the one in the W+dijet final state is of particular relevance for the present analysis. The corresponding cross section was measured in pp both by ATLAS and CMS [13,14], and the DPS fraction was found to be 5-8% of the total number of W+dijet events. Moreover, recent observations of double open charm [15-18] and same-sign WW (ssWW) production [19] clearly show the existence of DPS interactions in pp collisions. MPI play a major role in the underlying event (UE) and are thus taken into account in all MC generators developed for the LHC [20,21]. On the other hand, the study of DPS will lead to an understanding of two-parton correlations in the nucleon. In particular, the DPS cross sections involve new non-perturbative two-body quantities, the so-called two-particle Generalised Parton Distribution Functions (2GPDs), which encode novel features of the non-perturbative nucleon structure. Such distributions have the potential to unveil two-parton correlations in the nucleon structure [22,23] and give access to information complementary to that obtained from nucleon one-body distributions. The study of MPI, and in particular of DPS reactions, in pA collisions is important for our understanding of MPI in pp collisions, and it constitutes a benchmark of the theoretical formalism available for these processes. On the other hand, MPI in pA collisions may play an important role in the underlying event and in high-multiplicity pA events. Moreover, it was argued in [11] that they are directly related to longitudinal parton correlations in the nucleon.
The theory of MPI, and in particular DPS, in pA collisions was first developed in [24], where it was shown that two DPS contributions are at work in such a case. First, there is the so-called DPS1 contribution, depicted in the left panel of Fig. (1), in which the incoming nucleon emits two partons that interact with two partons of one target nucleon in the nucleus, making such a process formally identical to DPS in pp collisions. Next there is a new type of contribution, depicted in the right panel of Fig. (1) and often called DPS2, in which the two partons emitted by the infalling nucleon interact with two partons belonging to two distinct nucleons of the target nucleus located at the same impact parameter. Such a contribution is parametrically enhanced by a factor A^(1/3) over the DPS1 contribution, A being the mass number of the nucleus. The basic challenge in observing and making precision studies of DPS, both in pp and pA collisions, is tackling the large leading twist (LT), single parton scattering (SPS), background. This problem is especially acute in pA collisions where, due to a luminosity several orders of magnitude lower than in pp collisions, the rare DPS cross sections suffer a serious deficit in statistics [25,26]. Recently a new method was suggested [27] which could allow the observation of DPS2 in pA collisions. It was pointed out that DPS2 has a different dependence on the impact parameter than the LT and DPS1 contributions: while the LT and DPS1 contributions are proportional to the nuclear thickness function T(B), B being the pA impact parameter, the DPS2 contribution is proportional to the square of T(B). Therefore the cross section for producing a given final state can be schematically written as dσ/d²B ≃ [σ_LT + σ_DPS1] T(B) + σ_DPS2 T²(B), (1) where T(B) is normalized to the mass number A of the nucleus. This observation makes it possible to distinguish the DPS2 contribution in pA collisions from both the LT and DPS1 contributions, which are instead linear in T(B). This approach was used in [27] to study two-dijet processes in pA collisions. The purpose of the present paper is to investigate whether the latter approach can be used to observe the DPS2 process in pA collisions for the following final states, ordered by decreasing cross sections: W±+dijet (Wjj), W±+bb (Wbb) and same-sign WW (ssWW). In all considered channels one electroweak boson (W±) is produced in one of the scatterings and decays leptonically into a muon and a neutrino. A second scattering in the same pA collision produces the remaining part of the final state (jj, bb, W±). The first process, as emerges from our simulations, has the advantage of higher statistics, which could allow the characterization of the DPS cross section. The second has been discussed in detail in [28] for pp collisions and, despite the lower rate, its study is relevant since the DPS contribution is an important background to new-physics searches with the same final state. The third is a golden-channel DPS reaction but suffers from very low cross sections [29-31]. We show in the following that in the Wjj and Wbb cases there is a rather large number of events, which allows DPS2 to be determined already from the data recorded in pA runs in 2016 at the LHC. The situation will improve further for the next pA runs at the LHC scheduled for 2024. On the other hand, the ssWW process suffers from rather low statistics, even in the next runs.
Nevertheless we expect that it will be observable in future runs if W reconstruction techniques allow the W charge to be established from its hadronic decays. The paper is organised as follows. In Section II we briefly review the theoretical framework on which our calculations are based. In the following three Sections we present our results for each considered final state, together with the corresponding discussion. Our findings are summarised in the conclusions. II. THEORETICAL FRAMEWORK The cross section for the production of final states C and D in pA collisions via double parton scattering can be written as a convolution of the double 2GPDs G_p, G_A of the proton and the nucleus [11]. Notably, the two-parton GPDs depend on the transverse momentum imbalance Δ. Eq. (2) can be suitably extended to describe DPS in pA collisions [11,24]. Since our analysis will especially deal with the impact parameter B dependence of the cross section, we find it natural to rewrite Eq. (2) in coordinate space, introducing the double distributions D_{p,A}, which are the Fourier conjugates of G_{p,A} with respect to Δ. In this representation the latter give the number density of parton pairs with longitudinal fractional momenta x₁, x₂ at a relative transverse distance b⊥, and they admit a probabilistic interpretation. In the impulse approximation for the nucleus, neglecting possible corrections due to shadowing for large nuclei, and taking into account the fact that R_A ≫ R_p for heavy nuclei, we can rewrite the latter expression in b⊥ space, Eq. (3) [11,24], as a sum of a DPS1 and a DPS2 term. Here m = 1 if C and D are identical final states and m = 2 otherwise, and i, j, k, l = {q, q̄, g} are the parton species contributing to the final states C(D). In Eq. (3) and in the following, dσ indicates the partonic cross section for producing the final state C(D), differential in the relevant set of variables Ω_C and Ω_D, respectively. The functions f_i appearing in Eq. (3) are single parton densities, and the subscript N indicates nuclear parton distributions. All these densities additionally depend on the factorization scales μ_{C(D)}, whose values are set to the largest scale produced in a given final state. The nuclear thickness function T_{p,n}(B), mentioned in the Introduction and appearing in Eq. (3), is obtained by integrating the proton and neutron densities ρ^(p,n) in the nucleus over the longitudinal component z, T_{p,n}(B) = ∫ dz ρ^(p,n)(r), where we have defined r, the distance of a given nucleon from the nucleus centre, in terms of the impact parameter B between the colliding proton and nucleus, r = √(B² + z²). Following Ref. [32], for the 208Pb nucleus the proton and neutron densities are described by a Wood-Saxon distribution, ρ^(p,n)(r) = ρ₀^(p,n) / {1 + exp[(r − R₀^(p,n))/a^(p,n)]}. For the neutron density we use R₀ⁿ = 6.7 fm and aⁿ = 0.55 fm [33]; for the proton density we use R₀ᵖ = 6.68 fm and aᵖ = 0.447 fm [34]. The ρ₀^(p,n) parameters are fixed by requiring that the proton and neutron densities, integrated over all distances r, are normalized to the number of protons and neutrons in the lead nucleus, respectively. As already anticipated, the DPS1 contribution, the first term in Eq. (3), stands for the same mechanism at work in pp collisions. It depends linearly on the nuclear thickness function T and therefore scales as the number of nucleons in the nucleus, A. The second term, the DPS2 contribution, contains in principle two-body nuclear distributions.
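The thickness functions above are straightforward to evaluate numerically. The following sketch integrates the quoted Wood-Saxon profiles over z and fixes ρ₀ by the stated normalization; the proton and neutron numbers for 208Pb (Z = 82, N = 126) are standard values not spelled out in the text:

```python
import numpy as np
from scipy import integrate

# Wood-Saxon parameters for 208Pb quoted in the text (R0, a in fm).
PARAMS = {"p": dict(R0=6.68, a=0.447, N=82),   # protons: Z = 82 (standard)
          "n": dict(R0=6.70, a=0.550, N=126)}  # neutrons: N = 126 (standard)

def thickness(B, R0, a, N, zmax=30.0):
    """T(B) = int dz rho(sqrt(B^2+z^2)), with rho0 fixed by requiring
    int d^3r rho(r) = N (number of protons or neutrons)."""
    ws = lambda r: 1.0 / (1.0 + np.exp((r - R0) / a))
    norm, _ = integrate.quad(lambda r: 4*np.pi*r**2 * ws(r), 0, zmax)
    rho0 = N / norm
    val, _ = integrate.quad(lambda z: rho0 * ws(np.hypot(B, z)), -zmax, zmax)
    return val  # fm^-2

B = np.linspace(0, 10, 6)
TA = [sum(thickness(b, **PARAMS[s]) for s in ("p", "n")) for b in B]
print(np.round(TA, 3))  # total nucleon thickness; integrates to A = 208
```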
We work here in the impulse approximation, neglecting short-range correlations in the nuclei, since their contribution may change the results by a few percent only [27]. The DPS2 term is therefore proportional to the product of one-body nucleon densities in the nucleus, i.e. it depends quadratically on T and, notably, scales as A^(4/3). For simplicity we work in the mean-field approximation for the nucleon. In this approximation the double GPD has the factorized form D_p(x₁, x₂, b⊥) = f(x₁) f(x₂) T(b⊥), where the function T(b⊥) describes the probability of finding two partons at a relative transverse distance b⊥ in the nucleon and is normalized to unity. In this simple approximation the function does not depend on parton flavour or fractional momenta. One may then define the so-called effective cross section, 1/σ_eff = ∫ d²b⊥ [T(b⊥)]², which controls the double parton interaction rate. Under all these approximations the DPS cross section in pA collisions can be rewritten in a compact form, Eq. (8), in which σ_eff enters the DPS1 term while the DPS2 term depends on ∫ d²B T²(B). We find it important to remark on the key observation that leads to the second term of Eq. (3): the b and B integrals practically decouple, since the nuclear density does not vary on subnuclear scales [11,24,35]. As a result this term depends on the double GPD integrated over the transverse distance b⊥, i.e. at Δ = 0, for which we again assume the mean-field approximation, G_p(x₁, x₂, Δ = 0) = f(x₁) f(x₂). In the DPS1 term, deviations from the mean-field approximation for the 2GPDs are at least partially taken into account by using in our calculations the experimental value of σ_eff measured in pp collisions. Additional corrections of order 10%-20% to Eq. (8), due to longitudinal correlations in the nucleon [11] and to effects beyond the mean-field approximation, will be neglected in the following. Note that, after the integration over b⊥, σ_eff is the only non-perturbative parameter characterising the DPS cross section. We shall neglect the small possible dependence of σ_eff on energy: while there is some energy dependence in the pQCD and mean-field approach, it is at least partly compensated by non-perturbative contributions to σ_eff [36]. In this last part of the Section we specify the kinematics and additional settings with which we evaluate Eq. (8). We consider proton-lead collisions at a centre-of-mass energy √s_pN = 8.16 TeV. Due to the different energies of the proton and lead beams (E_p = 6.5 TeV and E_Pb = 2.56 TeV per nucleon), the resulting proton-nucleon centre-of-mass frame is boosted with respect to the laboratory frame by Δy = (1/2) ln(E_p/E_N) = 0.465 in the proton direction, assumed to be at positive rapidity. Therefore the muon and jet rapidities in this frame are given by y_CM = y_lab − Δy which, given the rapidity coverage in the laboratory system |y_lab| < 2.4, translates into the range −2.865 < y_CM < 1.935. In all calculations we have always considered proton-nucleon centre-of-mass rapidities. The relevant partonic cross sections have been evaluated at leading order [37] in the respective couplings, differential in the muon and/or jet transverse momenta and rapidities, in order to implement the realistic kinematical cuts used in experimental analyses. For the jet cross sections, final-state partons are identified with jets, as appropriate for a leading-order calculation. We use CTEQ6L1 free-proton parton distributions [38] and EPS09 nuclear parton distributions [39].
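As a rough cross-check of the relative size of the two mechanisms, one can evaluate the mean-field "pocket-formula" scalings, DPS1 ∝ A/σ_eff and DPS2 ∝ (A−1)/A ∫ d²B T²(B). The sketch below uses a single Wood-Saxon profile for all 208 nucleons; both the overall normalization and this simplification are assumptions of the sketch, not the paper's Eq. (8):

```python
import numpy as np
from scipy import integrate

A, sigma_eff = 208, 1.8            # sigma_eff = 18 mb = 1.8 fm^2

# Single Wood-Saxon profile for all 208 nucleons (simplifying assumption).
R0, a = 6.7, 0.55
ws = lambda r: 1.0 / (1.0 + np.exp((r - R0) / a))
norm, _ = integrate.quad(lambda r: 4*np.pi*r**2*ws(r), 0, 30)
rho0 = A / norm

def T(B):
    v, _ = integrate.quad(lambda z: rho0*ws(np.hypot(B, z)), -30, 30)
    return v

# DPS2/DPS1 ~ sigma_eff * (A-1)/A * int d^2B T^2(B) / A (pocket formula).
int_T2, _ = integrate.quad(lambda B: 2*np.pi*B*T(B)**2, 0, 15)
ratio = sigma_eff * ((A - 1)/A) * int_T2 / A
print(f"DPS2/DPS1 ~ {ratio:.2f}")   # order 2, cf. the >2x found in Tab. I
```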
Consistently with the cross section calculations, both distributions have been evaluated at leading order, with factorisation scales fixed to M_W and/or the transverse momentum of the jets, depending on the considered final state. III. RESULTS: W±jj In this Section we present results for the associated production of one electroweak boson, produced in one of the scatterings and decaying leptonically into a muon and a neutrino, and of a dijet system produced in the other scattering. This process has already been analyzed in pp collisions at √s = 7 TeV by ATLAS [13] and CMS [14], whose results therefore constitute a solid baseline for this analysis. For this channel we define the fiducial phase space for the muon in terms of its transverse momentum and rapidity by requiring p_T^μ > 25 GeV and |y_lab^μ| < 2.4, taken over from the analysis of Ref. [40]. The fiducial phase space for jets is given by p_T^jets > 20 GeV and |y_lab^jets| < 2.4. As already discussed, setting aside the factorization hypothesis on the double PDFs, the DPS2 term is largely free of unknowns. On the contrary, the DPS1 contribution needs a value of σ_eff as input. As already mentioned, both ATLAS [13] and CMS [14] have measured the DPS contribution to the Wjj final state in pp collisions at √s = 7 TeV and found σ_eff^Wjj = 15 ± 3 (stat.) +5/−3 (syst.) mb and σ_eff^Wjj = 20.7 ± 0.8 (stat.) ± 6.6 (syst.) mb, respectively. We combine these numbers into σ̄_eff = 18 ± 6 mb. Since we simulate pA collisions at √s_pN = 8.16 TeV, a centre-of-mass energy close to the energies at which those values of σ_eff were extracted, we use this average in our numerical simulations for the Wjj and Wbb final states, neglecting any possible dependence of σ_eff on energy. The only source of theoretical systematic error that we associate with the predictions is the one related to the σ_eff uncertainty. Theoretical errors due to missing higher orders can be kept under control by using higher-order calculations, which are known and available in the literature; uncertainties related to PDFs and nuclear effects are by far subleading in the present context. We are now in a position to discuss our results. First we quantify, at the integrated level, the DPS2 contribution to DPS in pA collisions which, despite having been predicted theoretically [24], has not yet been observed experimentally. For this purpose we report in the left column of Tab. I the values of the fiducial cross section for producing the Wjj final state via the DPS mechanisms. These numbers account for W charge-summed cross sections in both the muon and electron decay channels. From the table it appears that the DPS2 contribution is more than two times larger than the DPS1 one. With these numbers at our disposal we may use the strategy put forward in Ref. [27] to separate the DPS2 contribution, which exploits the experimental capability to accurately relate centrality with the impact parameter B of the pA collision. We start discussing the method by presenting in the left panel of Fig. (2) the Wjj DPS cross section differential in the impact parameter B. In the right panel of the same figure the same differential distribution is normalized to the nuclear thickness function. With such a normalization, the DPS1 contribution contributes a constant value to the cross section, as does the LT background (not shown in the plot), while DPS2 shows a B dependence driven by T(B).
The DPS2 observation essentially relies on the experimental ability to distinguish a non-constant behaviour of such a normalized distribution. The efficiency of this discrimination method depends on the accumulated integrated luminosity; here we choose a value in line with the data recorded in the 2016 pA runs, ∫L dt = 0.1 pb⁻¹. For this purpose we present in the central panel of Fig. (3) the number of DPS signal events for the Wjj channel integrated in bins of B. The distribution presents a kinematic zero at B = 0, due to the Jacobian arising from Eq. (3) when the cross section is kept differential in B. On the same plot we also superimpose the uncertainty on the predictions coming from the propagation of the error on σ_eff. Assuming that the statistical errors follow a Poissonian distribution, we present in the right panel of Fig. (3) the expected number of signal events integrated in bins of B and normalized to the integral of the nuclear thickness function in that bin, n₁ⁱ = ∫ d²B T_A(B), where the integration runs over the i-th bin edges. It appears that the expected uncertainties will allow a discrimination of the non-constant DPS2 contribution. Quite interestingly, the method can be applied to subtract the overwhelming LT contribution or, at the least, to complement the subtraction techniques already developed. For this purpose we may define the quantity R(i, i₀) = N_ev^i / n₁ⁱ − N_ev^{i₀} / n₁^{i₀}. (10) It is then easy to verify, by integrating Eq. (1) over two distinct B bins, that R is independent of the LT and DPS1 contributions. In Eq. (10), N_ev^i is the number of events in the i-th B bin for the assumed integrated luminosity. The index i₀ corresponds to the subtraction bin, chosen in the peripheral region, yet not the most peripheral one; the choice of the subtraction point was discussed in [27]. In our simulation we choose the subtraction bin to be the one for which 6 < B < 7 fm and indicate with N_ev^0 the number of events in that bin. The resulting contribution is presented in the left panel of Fig. (3). Such a quantity is completely independent of the LT SPS background and of the DPS1 contribution, since both contribute a constant to the B distribution. The method will be as efficient as the experimental errors allow a non-constant behaviour to be discriminated in the data. The number of DPS2 events in a given bin i can be recovered from this quantity as N_DPS2^i = R(i, i₀) n₂ⁱ / (n₂ⁱ/n₁ⁱ − n₂^{i₀}/n₁^{i₀}), (11) where we have defined n₂ⁱ = ∫ d²B T_A²(B) and, again, the integration runs over the i-th bin edges. Given the large number of DPS signal events in the Wjj channel, the characterization of the DPS cross section can be attempted by inspecting the charged-lepton rapidity distributions. The latter are presented in the left panel of Fig. (4) for all charge contributions and DPS mechanisms, and are obtained by integrating over the impact parameter B and over the dijet phase space. As can be observed from the plot, the DPS1 and DPS2 mechanisms produce quite similar lepton rapidity distributions, and therefore this observable is not expected to discriminate between them. This conclusion, however, may change if correlations beyond the mean-field approximation are sizeable and eventually generate a distortion of the spectra. Correlations beyond the mean-field approximation could also be appreciated by considering the lepton charge asymmetry, an extension of the familiar observable defined in SPS, A(y) = [dσ(μ⁺)/dy − dσ(μ⁻)/dy] / [dσ(μ⁺)/dy + dσ(μ⁻)/dy]. The corresponding distribution is presented in the right panel of Fig. (4).
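In practice the subtraction can be coded directly on binned yields. The sketch below follows the logic reconstructed in Eqs. (10)-(11): the LT and DPS1 contributions cancel in R(i, i₀), and the residual slope in n₂/n₁ fixes the DPS2 strength. The exact normalization used in the paper may differ:

```python
import numpy as np

def dps2_subtraction(N_ev, n1, n2, i0):
    """Recover per-bin DPS2 event counts from binned yields N_ev, given
    the bin integrals n1 = int d^2B T(B) and n2 = int d^2B T^2(B)."""
    N_ev, n1, n2 = map(np.asarray, (N_ev, n1, n2))
    R = N_ev / n1 - N_ev[i0] / n1[i0]          # LT and DPS1 cancel here
    slope = n2 / n1 - n2[i0] / n1[i0]
    c2 = R[slope != 0] / slope[slope != 0]     # DPS2 strength per bin
    return np.median(c2) * n2                  # fitted DPS2 yield per bin

# toy check: events = c1*n1 + c2*n2 with known constants c1, c2
n1 = np.array([1., 3., 5., 4., 2., 1.]); n2 = np.array([2., 7., 9., 5., 1.5, .5])
N = 100*n1 + 40*n2
print(dps2_subtraction(N, n1, n2, i0=4))       # recovers 40*n2 exactly
```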
Given the factorized ansatz for the double PDFs and the fact that the dijet system is completely integrated over, its lineshape is the same as that of the lepton charge asymmetry measured in SPS production of W± in pA collisions, see for example Fig. (4) of Ref. [40]. Therefore, after proper subtraction of the LT and DPS1 contributions, the observation in data of any departure from the predicted lineshape might be an indication of parton correlations not accounted for in the mean-field approximation. IV. RESULTS: Wbb In this Section we consider a special case of the former, in which the second scattering produces a bb heavy-quark pair. This particular final state has been analyzed in detail in pp collisions in [28], where a number of kinematic variables were proposed to disentangle the signal DPS process from the SPS background. It is worth noticing that this final state is particularly important for new-physics searches, so that the DPS component needs to be properly modelled. For this final state we use σ̄_eff = 18 ± 6 mb, as in the Wjj case. We define the fiducial phase space for the muon in terms of its transverse momentum and rapidity by requiring p_T^μ > 25 GeV and |y_lab^μ| < 2.4; the fiducial phase space for b-jets is given by p_T^{b-jets} > 20 GeV and |y_lab^{b-jets}| < 2.4. In this particular case, the factorisation scale for the bb-jet system is fixed to the transverse mass of the jet. The Wbb cross section results are reported in the right column of Tab. I. As expected, they are reduced by two orders of magnitude with respect to the Wjj case. Assuming again a rather conservative scenario in which the integrated luminosity is ∫L dt = 0.1 pb⁻¹, we present the expected number of DPS signal events in the central panel of Fig. (5); in the left panel we present, for this particular final state, the subtracted quantity defined in Eq. (10). From these plots it is clear that for this final state, given the lower number of events, the identification of a non-constant behaviour in the data will be more difficult. Nevertheless, since at the B-integrated level the DPS2 contribution is more than twice the DPS1 one, this channel still has the potential to allow the observation of the DPS2 mechanism. V. RESULTS: ssWW Double Drell-Yan-like processes have been recognized as an ideal laboratory to investigate DPS [2,41] and its factorization properties [42]. Among this class of processes, the production of a same-sign W boson pair (ssWW), where each W boson is produced in a distinct hard scattering, has received special attention [29,31,43-46], since tree-level single parton scattering (SPS) starts contributing only at higher orders in the strong coupling and can be further suppressed by additional jet-veto requirements. This process has been investigated in pA collisions in Ref. [47]. A measurement of the ssWW DPS cross section in pp collisions at √s = 13 TeV has recently been reported by the CMS collaboration [19]. In that analysis a value of σ_eff = 12.7 +5.0/−2.9 mb was extracted, which we use in our predictions, assuming that this value is also valid at √s_pN = 8.16 TeV, the nominal energy at which we simulate pA collisions in this analysis. Again we assume that its value is the same in both charge channels and the same across the fiducial phase space. Both W's are required to decay into same-sign muons, with the fiducial phase space taken from the analysis of [40]: p_T^μ > 25 GeV for the leading muon, p_T^μ > 20 GeV for the subleading one, and |y_lab^μ| < 2.4 for the muon rapidities.
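The lepton charge asymmetry discussed at the beginning of this Section is a simple ratio of binned spectra; a toy evaluation (the input spectra are illustrative Gaussians, not the paper's distributions) looks as follows:

```python
import numpy as np

# A(y) = (dσ+ - dσ-)/(dσ+ + dσ-) from binned rapidity spectra.
y = np.linspace(-2.865, 1.935, 9)                 # CM rapidity coverage
dsig_plus = np.exp(-0.5 * ((y - 0.3) / 1.5)**2)   # toy mu+ spectrum
dsig_minus = 0.8 * np.exp(-0.5 * ((y + 0.2) / 1.4)**2)  # toy mu- spectrum
asym = (dsig_plus - dsig_minus) / (dsig_plus + dsig_minus)
print(np.round(asym, 3))
```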
We report the cross section results in Tab. II for the various DPS mechanisms and for the separate dimuon charge configurations. In Fig. 6 we present the differential cross sections and the number of expected events for ∫L dt = 1 pb⁻¹, a value within reach of future pA runs at the LHC. Considering all leptonic channels (μ±μ±, e±μ±, e±e±), the resulting fiducial cross section is four times larger than that reported in Tab. II and is of order 1 pb. These results are consistent with the ones reported in Ref. [47], after noting that those were obtained at a higher √s_pN = 8.8 TeV with respect to the one used here and that the cross sections were calculated there at next-to-leading order. Given these numbers, we conclude that the observation of DPS in this channel will depend not only on the integrated luminosity accumulated in future pA runs but also on the experimental ability to reconstruct W's, and their charge, via hadronic decays. VI. CONCLUSIONS In this paper we have calculated DPS cross sections for a variety of final states produced in pA collisions at the LHC. We have discussed a strategy to separate the so-called DPS2 contribution, pertinent to pA collisions, which relies on the experimental capability to correlate centrality with the impact parameter B of the proton-nucleus collision. In this respect the Wjj final state has large enough cross sections for the method to be used already with the 2016 recorded data. Moreover, the distribution in lepton charge asymmetry has the potential to uncover correlations in the double GPD beyond the mean-field approximation. The Wbb final state, having a lower rate, can still be used at the inclusive level to search for the DPS2 contribution. The observation of the ssWW final state, a clean but rare process, will depend crucially on the running conditions of future pA runs and on W-reconstruction experimental capabilities.
6,325.2
2019-12-05T00:00:00.000
[ "Physics" ]
Localization of Near-Field Sources Based on Sparse Signal Reconstruction with Regularization Parameter Selection Source localization using a sensor array in the near-field is a two-dimensional nonlinear parameter estimation problem which requires jointly estimating two parameters: direction-of-arrival and range. In this paper, a new source localization method based on sparse signal reconstruction is proposed for the near-field. We first utilize l1-regularized weighted least-squares to find the bearings of the sources. Here, the weight is designed by making use of the probability distribution of the spatial correlations among symmetric sensors of the array. Meanwhile, a theoretical guidance for choosing a proper regularization parameter is also presented. Then a well-known l1-norm optimization solver is employed to estimate the ranges. The proposed method has a lower variance and higher resolution compared with other methods. Simulation results are given to demonstrate the superior performance of the proposed method. Introduction Source localization using a sensor array is one of the most important topics in the array signal processing community. A great number of source localization methods have been proposed in the past few decades. However, most of these methods focus on the far-field case, in which the signal can be regarded as a planar wave and only direction-of-arrival (DOA) estimation is required. When the range between the sources and the array is not sufficiently large compared with the aperture of the array (i.e., in the near-field case), the wavefront of the signal at the array is characterized by both azimuth and range. Thus, the performance of DOA estimation methods for the far-field case degrades significantly in the near-field. In recent years, many methods have been proposed to deal with the source localization problem in the near-field, such as maximum likelihood methods [1], two-dimensional MUSIC methods [2], high-order-cumulants-based methods [3-7], and linear prediction methods [8,9]. However, most of these methods either require additional parameter pairing [8,9] or involve a large computational cost due to multidimensional search [1,2] or the computation of cumulants [3-7]. Furthermore, in order to take advantage of the symmetric property of the array, some other methods suffer from heavy aperture loss (i.e., at most M sources can be detected when the number of sensors is 2M + 1 [10,11]). Recently, Malioutov et al. [12] proposed a far-field DOA estimation method named L1-SVD, showing some advantages including high resolution and improved robustness to noise, to a limited number of snapshots, and to correlation of the sources. After that, several methods based on sparse signal reconstruction (SSR) were proposed to locate near-field sources. Wang et al.
[13] proposed a mixed-source localization method based on the sparse representation of cumulants, achieving a higher estimation accuracy. However, the method suffers from a heavy computational load owing to the computation of the cumulants. By representing the source range and DOA information as a sensor-dependent phase progression, a Bayesian-compressed-sensing-based source localization method was proposed for uniform and sparse linear arrays [14]; however, it also suffers from a large computational cost because of iterations. By jointly using MUSIC and sparse signal reconstruction, Tian and Sun [15] also proposed a source localization method for mixed sources. By making use of the spatial correlations of the outputs of symmetric sensors, an SSR-based source localization method was proposed by Hu et al. [16,17], showing superior performance. However, with l1-regularized least-squares optimization employed to find the sparse solution, the regularization parameter was selected manually by cross validation, which renders the method impractical; moreover, the parameter is prone to be selected improperly. In this paper, a novel SSR-based source localization method is proposed for the near-field. Firstly, as in the method of [16], the azimuth and range are decoupled by exploiting the spatial correlations of the outputs of symmetric sensors, so that the two-dimensional parameter estimation problem in the near-field is converted into a DOA estimation problem in the far-field. Secondly, l1-regularized weighted least-squares optimization is applied to the virtual far-field array to acquire the DOA estimates; meanwhile, similar to [18], an approach for choosing the regularization parameter is presented. At last, L1-SVD is utilized to estimate the ranges of the sources. The paper is organized as follows. Section 2 describes the data model of source localization. An existing SSR-based method for source localization in the near-field is reviewed in Section 3. The proposed methods for DOA and range estimation are presented in Sections 4 and 5, respectively. Simulation results are shown in Section 6. Section 7 concludes this paper. Data Model Consider the case in which K near-field narrowband sources impinge onto a uniform linear array with N = 2M + 1 elements, as depicted in Figure 1. The received signal of the m-th sensor can be expressed as x_m(t) = Σ_{k=1}^{K} s_k(t) e^{jω_mk} + v_m(t), m = −M, ..., M, t = 1, ..., T, where s_k(t) represents the k-th source signal, v_m(t) denotes the additive noise received by the m-th sensor, and the phase ω_mk = (2π/λ)(r_k − r_mk) involves r_mk, the range between the k-th source and the m-th sensor, and r_k, the range between the k-th source and the reference sensor of the array; λ is the wavelength of the narrowband signals and T denotes the number of snapshots. It can easily be derived from Figure 1 that r_mk = √(r_k² + (md)² − 2 r_k m d sin θ_k), where θ_k denotes the DOA of the k-th source and d is the inter-sensor spacing. Let x(t) = [x_{−M}(t), x_{−M+1}(t), ..., x_M(t)]ᵀ, s(t) = [s_1(t), s_2(t), ..., s_K(t)]ᵀ, and v(t) = [v_{−M}(t), v_{−M+1}(t), ..., v_M(t)]ᵀ denote the received signal vector, the source signal vector, and the noise vector, respectively. By stacking all {x_m(t), m = −M, −M+1, ..., M} into a vector, we arrive at x(t) = A s(t) + v(t), with A the near-field array manifold. For convenience we make the following assumptions: (A1) the source signals are uncorrelated with each other and independent of the noise; (A2) the noises are spatially uncorrelated Gaussian white noise.
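A small simulator of this data model is easy to write. The sketch below builds the exact near-field delays for a symmetric ULA (wavelength set to 1, ranges in wavelengths) and draws complex Gaussian sources and noise; it illustrates the Section 2 model and is not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)

def nearfield_data(thetas_deg, ranges, M=7, d=0.25, T=200, snr_db=10):
    """Simulate x(t) for a symmetric ULA (N = 2M+1, spacing d in
    wavelengths) using the exact near-field delays."""
    m = np.arange(-M, M + 1)[:, None]                    # sensor indices
    th = np.deg2rad(np.asarray(thetas_deg))[None, :]
    r = np.asarray(ranges, float)[None, :]               # in wavelengths
    r_mk = np.sqrt(r**2 + (m*d)**2 - 2*r*m*d*np.sin(th))
    A = np.exp(1j * 2*np.pi * (r - r_mk))                # lambda = 1
    K = th.size
    S = (rng.standard_normal((K, T)) + 1j*rng.standard_normal((K, T))) / np.sqrt(2)
    X = A @ S
    noise = (rng.standard_normal(X.shape) + 1j*rng.standard_normal(X.shape)) / np.sqrt(2)
    return X + noise * 10**(-snr_db / 20)

X = nearfield_data([-2.6, 2.4], [16, 23])   # the Section 6 scenario
print(X.shape)                              # (15, 200)
```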
The DOA Estimation Method in [16] Under the above assumptions, the spatial correlation between the m-th and the n-th sensor outputs can be expressed as c(m, n) = E[x_m(t) x_n*(t)] = Σ_{k=1}^{K} σ²_{s,k} e^{j(ω_mk − ω_nk)} + σ²_v δ(m − n), where σ²_{s,k} stands for the power of the k-th source signal, σ²_v represents the noise power, and δ(·) denotes the Dirac function. Note that for n = −m the spatial correlation no longer depends on the range parameter r_k: under the Fresnel approximation the quadratic phase terms cancel, and we have c(m, −m) = Σ_{k=1}^{K} σ²_{s,k} e^{jmη_k} + σ²_v δ(m), with η_k = (4πd/λ) sin θ_k. Stacking all c(m, −m) from m = −M to M, we obtain r_u = Ã(θ) r_s + σ²_v e, where r_s = [σ²_{s,1}, ..., σ²_{s,K}]ᵀ, [Ã(θ)]_{m,k} = e^{jmη_k}, and e is the vector whose central entry (m = 0) equals one and whose other entries are zero. Comparing (10) with (7), r_u behaves like the received signal of a far-field array with array manifold Ã(θ) and source signal r_s, corrupted by the noise σ²_v e. Note that the two-dimensional (DOA and range) estimation problem has now been transformed into a one-dimensional (DOA) estimation problem. This one-dimensional estimation problem can be cast into a sparse signal recovery problem as follows. Define a set Θ = {θ_1, θ_2, ..., θ_L} as the sampling grid corresponding to the DOAs of the potential sources; the number L of potential sources should be much greater than the number K of real sources and the number N of sensors. The overcomplete basis A(Θ) = [ã(θ_1), ..., ã(θ_L)] is constructed with [ã(θ_l)]_m = e^{jmη_l}, where η_l = (4πd/λ) sin θ_l. The sparse signal is represented by a vector p ∈ R^{L×1}, whose l-th element is a nonzero weight p_l = σ²_{s,k} if the k-th source comes from the direction θ_l for some k, and zero otherwise; that is, the sparse vector p acts as the spatial spectrum. Thus the sparse signal recovery model is formulated as r_u = A(Θ) p + σ²_v e. A usual way to solve for the sparse signal p is the well-known ℓ1-regularized least-squares minimization [16] p̂ = arg min_p ‖r_u − A(Θ) p‖²₂ + μ ‖p‖₁, where μ denotes the regularization parameter, which balances the data-fitting error against the sparsity of p. It is important to select this parameter properly, since it has a great impact on the spatial spectrum: if the parameter is too small, some peaks in the spatial spectrum disappear; on the contrary, spurious peaks arise when it is too large. In [16] the parameter is selected manually by cross validation, which not only causes inconvenience and improper selection but also makes the method unusable in practice. The Proposed DOA Estimation Method Under assumptions (A1) and (A2) of Section 2, the covariance matrix of the received signals can be expressed as R = A Σ_s Aᴴ + σ²_v I, where Σ_s = diag(r_s). In practice the true covariance matrix is unavailable; however, it can be consistently estimated by the sample covariance R̂ = (1/T) Σ_{t=1}^{T} x(t) x(t)ᴴ. The estimation error is ΔR = R̂ − R. Without considering the error caused by the Fresnel approximation, the vectorized form of ΔR satisfies [19] Δr_v = vec(ΔR) ∼ AsN(0, (Rᵀ ⊗ R)/T), where vec(·) refers to the vectorization operation, AsN(μ, Σ) represents the asymptotic normal distribution with mean μ and covariance Σ, and ⊗ denotes the Kronecker product. Define r_v ≜ vec(R). It can be verified that each element of r_u equals a corresponding element of r_v, namely the anti-diagonal entries of R (Eq. (18)); likewise, letting r̂_u denote the estimate of r_u and r̂_v = vec(R̂) the estimate of r_v = vec(R), the same element-wise correspondence holds for the estimates.
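The ℓ1-regularized least-squares step can be sketched with a projected ISTA iteration, exploiting the fact that the entries of p are nonnegative powers. It reuses the simulated X and builds the virtual far-field manifold of the previous sketch; the value of the regularization parameter μ is an arbitrary placeholder:

```python
import numpy as np

def l1_ls_spectrum(r_u, A, mu, n_iter=2000):
    """min_p ||r_u - A p||_2^2 + mu*||p||_1 with p >= 0, via projected
    ISTA; a sketch of the l1-regularized LS step, not the CVX solver."""
    p = np.zeros(A.shape[1])
    step = 0.5 / np.linalg.norm(A, 2) ** 2     # 1/Lipschitz of the gradient
    for _ in range(n_iter):
        grad = 2 * np.real(A.conj().T @ (A @ p - r_u))
        p = np.maximum(p - step * (grad + mu), 0.0)  # prox on p >= 0
    return p

M, d = 7, 0.25
grid = np.deg2rad(np.linspace(-90, 90, 361))
m = np.arange(-M, M + 1)
A_virt = np.exp(1j * 2*np.pi * 2*d * np.outer(m, np.sin(grid)))

R = X @ X.conj().T / X.shape[1]                # sample covariance of the
r_u = R[np.arange(15), np.arange(15)[::-1]]    # previous sketch; anti-diagonal
p_hat = l1_ls_spectrum(r_u, A_virt, mu=1.0)
print(np.rad2deg(grid[np.argsort(p_hat)[-2:]]))  # two largest peaks
```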
The estimation error of r_u is defined as Δr_u = r̂_u − r_u. Each element of Δr_u coincides with the corresponding element of Δr_v; collecting this correspondence in a selection matrix C, such that Δr_u = C Δr_v, it follows from (17) and (22), by the properties of the normal distribution, that Δr_u ∼ AsN(0, C (Rᵀ ⊗ R) Cᴴ / T). To fit the data r̂_u to its data model well while finding the sparsest solution p, it is better to employ the weighted least-squares method; that is, p̂ = arg min_p ‖W^{−1/2}(r̂_u − A(Θ) p)‖²₂ + μ ‖p‖₁, (26) where W is a weighting matrix, W^{−1/2} denotes the Hermitian square root of W^{−1}, and μ is the regularization parameter. As stated before, it is of significant importance to select the regularization parameter properly; here, similar to [18], an approach for choosing it is given as follows. In order to fit the data r̂_u to its data model well, W is set to the asymptotic covariance matrix of Δr_u, that is, W = C (Rᵀ ⊗ R) Cᴴ / T. From (24) it can be derived that W^{−1/2} Δr_u ∼ AsN(0, I), and hence ‖W^{−1/2} Δr_u‖²₂ ∼ Asχ²(n), where Asχ²(n) represents the asymptotic chi-square distribution with n degrees of freedom, n being the length of r_u. To solve the problem in (26), we introduce another parameter β and choose it high enough that ‖W^{−1/2}(r̂_u − A(Θ) p)‖²₂ ≤ β holds with a high probability p_h; β can then be fixed from the corresponding quantile of the Asχ²(n) distribution, and generally it is enough to choose p_h = 0.999 to determine β. Now the problem in (26) can be converted into p̂ = arg min_p ‖p‖₁ subject to ‖W^{−1/2}(r̂_u − A(Θ) p)‖²₂ ≤ β, (30) where ‖·‖ denotes the Frobenius norm. Equation (30) can be solved with the MATLAB toolbox CVX [20]. Range Estimation In this section, the L1-SVD approach [12] is exploited to estimate the ranges of the sources. Let θ̂ denote the DOAs estimated in the previous section. We define the potential source grid (θ̂, r) = {(θ̂_1, r_1), (θ̂_1, r_2), ..., (θ̂_1, r_P), (θ̂_2, r_1), ..., (θ̂_K, r_P)} to construct an overcomplete basis B(θ̂, r) = [b(θ̂_1, r_1), b(θ̂_1, r_2), ..., b(θ̂_1, r_P), b(θ̂_2, r_1), ..., b(θ̂_K, r_P)] ∈ C^{N×KP}, where P is the number of potential ranges. The source locations are assumed to lie exactly on the grid. Then the observed signals can be rewritten in matrix form as X = B(θ̂, r) S + V, where X = [x(1), ..., x(T)], V is the noise matrix, and S is a row-sparse matrix whose [(k−1)P + p]-th row is nonzero and equal to [s_k(1), s_k(2), ..., s_k(T)] if a source comes from (θ̂_k, r_p) for some k, and a zero row otherwise. In order to reduce the computational cost and the influence of the noise, we use the singular value decomposition (SVD) of the received signal matrix X: we decompose the data matrix into signal and noise subspaces and keep a reduced N × K matrix X_SV representing the signal subspace, X_SV = U L D_K = X F D_K, where F contains the right singular vectors and D_K = [I_K 0]ᵀ; here I_K refers to a K × K identity matrix and 0 is a K × (T − K) matrix of zeros. Furthermore, let S_SV = S F D_K and V_SV = V F D_K; then we obtain X_SV = B(θ̂, r) S_SV + V_SV, which can be written in vector form as x_SV(k) = B(θ̂, r) s_SV(k) + v_SV(k), k = 1, 2, ..., K. (34) Apparently, the two matrices X and X_SV share a common row sparsity; the difference between them is that the columns of X are indexed by time samples while those of X_SV are indexed by singular-vector number. To solve the sparse signal recovery problem, the ℓ2-norm over the singular-vector index is first computed for each spatial index q of S_SV, i.e. s_q^(ℓ2) = √(Σ_{k=1}^{K} |S_SV(q, k)|²); then an ℓ1-norm penalty is imposed on all s_q^(ℓ2), q = 1, 2, ..., KP. As a result, we can estimate the sparse matrix by minimizing the cost function ‖X_SV − B(θ̂, r) S_SV‖²_F + μ_SV Σ_q s_q^(ℓ2). The ranges are obtained by finding the largest peaks of s^(ℓ2) once the matrix Ŝ_SV is acquired.
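The parameter selection and the constrained problem (30) translate directly into Python: β is the p_h-quantile of the chi-square law, and cvxpy serves as a stand-in for the MATLAB CVX call. The degrees-of-freedom count for complex-valued data is an assumption of this sketch:

```python
import numpy as np
from scipy.stats import chi2
import cvxpy as cp

def solve_constrained(r_hat, A, Wm12, n_dof, p_h=0.999):
    """min ||p||_1 s.t. ||W^{-1/2}(r_hat - A p)||_2^2 <= beta, with
    beta the p_h-quantile of the asymptotic chi-square law."""
    beta = chi2.ppf(p_h, df=n_dof)
    p = cp.Variable(A.shape[1], nonneg=True)    # powers are nonnegative
    resid = Wm12 @ (r_hat - A @ p)              # complex affine expression
    prob = cp.Problem(cp.Minimize(cp.norm1(p)),
                      [cp.sum_squares(resid) <= beta])
    prob.solve()
    return p.value
```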
Simulation Results In this section, some numerical experiments are given to show the effectiveness and efficiency of the proposed method. We make a comparison in terms of RMSE and resolution ability between the proposed method and the method in [16], both of which are based on the theory of sparse signal recovery. In the following simulations, the sources and noises are modeled as temporally and spatially white Gaussian signals, and 200 Monte Carlo trials are performed to calculate the average result for each experiment. 6.1. Spatial Spectra. Firstly, we present an experiment comparing the proposed method with the method in [16] in terms of spatial spectra. Consider two closely spaced signals located at {16, −2.6°} and {23, 2.4°} impinging on a ULA with 15 sensors; the inter-sensor spacing of the ULA is assumed to be λ/4. The angular and range spatial spectra are depicted in Figures 2 and 3, respectively. According to Figures 2 and 3, the proposed method shows the same sharp peaks as the method in [16]; however, the proposed method achieves lower errors for range and DOA estimation compared with the method in [16]. RMSE versus SNR. Subsequently, we investigate the RMSE of DOA and range estimation versus SNR. To make a fair comparison, the two near-field sources are moved to {17, −18.3°} and {11, 5.6°}. With the SNR varying from 0 dB to 14 dB in steps of 2 dB and 200 snapshots, the RMSE of DOA and range estimation is depicted in Figures 4 and 5, respectively. It can be clearly seen that the proposed method achieves a lower estimation error than the method in [16] for both DOA and range estimation. RMSE versus the Number of Snapshots. In the second experiment, we evaluate the RMSE of DOA and range estimation as a function of the number of snapshots. The parameters are kept the same as before except that SNR = 10 dB; the RMSE of DOA and range estimation with respect to the number of snapshots is illustrated in Figures 6 and 7. According to the two figures, the proposed method shows a lower RMSE than the method in [16] for all numbers of snapshots considered. It can also be clearly noted that the proposed method achieves higher resolution than the method in [16]. Resolution Ability versus the Number of Snapshots. Now we assess the angular resolution ability of the two methods as a function of the number of snapshots. The parameters used in this experiment are kept the same as in the previous one except that SNR = 10 dB. Figure 9 shows the resolution ability versus the number of snapshots; the results in Figure 9 indicate that the proposed method has higher resolution compared with the method in [16]. According to the results of the above simulation experiments, it can be concluded that the proposed method shows a better performance than the method in [16] in terms of both RMSE and resolution ability, mainly because the idea of ℓ1-regularized weighted least-squares is utilized in the proposed method.
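The RMSE-versus-SNR experiments reduce to a Monte Carlo loop around the estimator. A single-source sketch, reusing the hypothetical nearfield_data() and l1_ls_spectrum() helpers introduced above (they are illustrations, not the authors' code), is:

```python
import numpy as np

def rmse_curve(snrs_db, trials=200):
    """Monte Carlo RMSE of the DOA estimate versus SNR (single source)."""
    grid = np.deg2rad(np.linspace(-90, 90, 361))
    m = np.arange(-7, 8)
    A = np.exp(1j * 2*np.pi * 2*0.25 * np.outer(m, np.sin(grid)))
    out = []
    for snr in snrs_db:
        errs = []
        for _ in range(trials):
            X = nearfield_data([-18.3], [17], snr_db=snr)
            R = X @ X.conj().T / X.shape[1]
            r_u = R[np.arange(15), np.arange(15)[::-1]]
            p = l1_ls_spectrum(r_u, A, mu=1.0)
            errs.append(np.rad2deg(grid[p.argmax()]) - (-18.3))
        out.append(np.sqrt(np.mean(np.square(errs))))
    return out

print(rmse_curve([0, 4, 8, 12]))
```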
Conclusions In this paper, a novel near-field source localization approach is proposed for a uniform linear array. Firstly, as in the method of [16], we convert the two-dimensional source localization problem into a one-dimensional DOA estimation problem by employing the correlations of the symmetric sensors of the array. Then, ℓ1-regularized weighted least-squares is exploited to estimate the DOAs of the sources; meanwhile, a theoretical guidance for selecting the regularization parameter is also presented. At length, the L1-SVD method is used to find the ranges of the sources based on the estimated DOAs. Future research includes low-complexity methods for source localization based on sparse signal recovery, since the computational cost of the proposed method is somewhat high. Figure 2: Angular spatial spectra for the two methods. Figure 3: Range spatial spectra for the two methods. Figure 6: RMSE of DOA estimation with respect to the number of snapshots. Figure 9: Angular resolution ability as a function of the number of snapshots.
4,071.2
2017-05-11T00:00:00.000
[ "Engineering", "Computer Science" ]
Defining a Conformational Consensus Motif in Cotransin-Sensitive Signal Sequences: A Proteomic and Site-Directed Mutagenesis Study The cyclodepsipeptide cotransin was described to inhibit the biosynthesis of a small subset of proteins by a signal sequence-discriminatory mechanism at the Sec61 protein-conducting channel. However, it was not clear how selective cotransin is, i.e. how many proteins are sensitive. Moreover, a consensus motif in signal sequences mediating cotransin sensitivity had not yet been described. To address these questions, we performed a proteomic study using cotransin-treated human hepatocellular carcinoma cells and the stable isotope labelling by amino acids in cell culture technique in combination with quantitative mass spectrometry. We used a saturating concentration of cotransin (30 micromolar) to identify also less-sensitive proteins and to discriminate the latter from completely resistant proteins. We found that the biosynthesis of almost all secreted proteins was cotransin-sensitive under these conditions. In contrast, the biosynthesis of the majority of the integral membrane proteins was cotransin-resistant. Cotransin sensitivity of signal sequences was related neither to their length nor to their hydrophobicity. Instead, in the case of signal anchor sequences, we identified for the first time a conformational consensus motif mediating cotransin sensitivity. Introduction Signal sequences of secretory and integral membrane proteins are mediators of the early steps of protein biogenesis and transport in cells [1-3]. After their synthesis at cytosolic ribosomes, signal sequences bind the signal recognition particle (SRP) and initiate targeting of the ribosome/nascent chain/SRP complex to the SRP receptor of the translocon machinery at the endoplasmic reticulum (ER) membrane. Signal sequences are also involved in translocon gating: they bind to the cytosolic side of the protein-conducting Sec61 channel and destabilize its closed conformation. Secretory proteins invariably possess signal sequences located at the N-terminus of the protein, the so-called signal peptides (SP). SPs are usually cleaved off following translocation of the proteins across the ER membrane. The signal sequences of integral membrane proteins do not mediate transfer across the ER membrane but integration of the proteins into the bilayer. Integral membrane proteins may also possess SPs; the majority, however, contain so-called signal anchor sequences (SAS), which are uncleaved and form part of the mature protein (usually the first transmembrane domain, TM1). Cotransin is a derivative of the fungal substance HUN-7293 [4,5]. Like HUN-7293, cotransin was shown to inhibit the Sec61 protein-conducting channel of the translocon complex in the presence of specific SPs [4,5]. As a consequence, the cotranslational translocation of the target proteins is prevented in an SP-discriminatory mechanism of action. Originally, only a small subset of proteins was reported to possess cotransin-sensitive SPs, and it was suggested that cotransin is a rather selective substance. The originally described group of cotransin-sensitive proteins comprises vascular cell adhesion molecule 1 (VCAM1), P-selectin, angiotensinogen, β-lactamase and the G protein-coupled corticotropin-releasing factor receptor type 1, an integral membrane protein [4]. Recently, another G protein-coupled receptor, namely the endothelin B receptor, was shown to be cotransin-sensitive [6].
Interestingly, no SAS-containing protein was found among this original subset of proteins. However, recent results showed that at least one SAS, namely that of tumor necrosis factor alpha (TNF-α), is also cotransin-sensitive [7,8]. The detailed mechanism of action of cotransin and the other cyclodepsipeptides is still not completely clear. To date, it was shown that cotransin affects neither SRP binding nor targeting of the ribosome/nascent chain/SRP complex to the ER membrane [4]. Crosslinking experiments suggested that the substances interact with the Sec61α subunit (protein-conducting channel) of the translocon complex [5] and dislocate the sensitive nascent chains to the Sec61β subunit [5,9]. These data suggest that the cyclodepsipeptides may compete with the sensitive SPs for binding to a specific acceptor site within the Sec61α subunit [10]. To date, a consensus motif mediating cotransin sensitivity has not been described, although some critical residues were identified in sensitive signal sequences [7,9,10]. Moreover, it is not known how selective cotransin is, i.e. how many proteins are indeed sensitive and whether SASs may also be affected in significant numbers. To address all these questions, we performed a proteomic study and analyzed the expression of secreted and integral membrane proteins of the human hepatocellular liver carcinoma cell line (HepG2) following cotransin treatment at a saturating concentration (30 μM). Sensitive proteins were identified using the stable isotope labelling by amino acids in cell culture (SILAC) technique in combination with quantitative mass spectrometry. The HepG2 cells were a gift of G. Püschel (Potsdam, Germany); the RotiLoad sample buffer was from Carl Roth (Karlsruhe, Germany). Plasmid constructions and site-directed mutagenesis Standard DNA manipulations were carried out. The AQP2 cDNA was cloned into the vector plasmid pEGFP-N1, thereby replacing the stop codon of AQP2. The resulting fusion construct WT.AQP2 encodes AQP2 C-terminally tagged with GFP. Introduction of the putative conformational consensus motif into the SAS of AQP2 (combined point mutations F25G, F26G, G27L, Q33K) was carried out by site-directed mutagenesis using the QuickChange site-directed mutagenesis kit from Stratagene (Heidelberg, Germany) according to the supplier's recommendations. The resulting mutant was CM.AQP2. The truncated construct WT.AQP2.NT encodes an N-terminal EGFP fusion to an AQP2 fragment (amino acid residues 1-40 of AQP2) consisting of the N terminus, TM1 and the first extracellular loop, in the pEGFP-C1 vector from Clontech. The mutant CM.AQP2.NT (combined point mutations F25G, F26G, G27L, Q33K) was derived by site-directed mutagenesis as described above. The nucleotide sequences of all plasmid constructs were verified by sequencing (Source BioScience LifeSciences, Berlin, Germany). Total secreted proteins from the combined cell culture media of the light and heavy samples (see above) were precipitated by adding 1 volume of 100% trichloroacetic acid (TCA) to 4 volumes of the cell culture medium and incubating at 4°C for 10 min. After centrifugation (13,000 x g, 5 min), the proteins were washed 3 times with 200 μl cold acetone. The proteins were resuspended, reduced and alkylated for SDS-PAGE in RotiLoad sample buffer as described above for the total membrane proteins. All experiments for total secretory and membrane proteins were repeated with switched isotopic coding (forward and reverse experiment).
Protein identification by mass spectrometry Tryptic digestion of proteins following SDS-PAGE and nano liquid chromatography-mass spectrometry/mass spectrometry (LC-MS/MS) experiments were essentially done as described [17]. In brief, gel slices were washed with 50% (v/v) acetonitrile in 50 mM ammonium bicarbonate, dehydrated in acetonitrile and dried in a vacuum centrifuge. The dried gel pieces were reswollen in 20 μl of 50 mM ammonium bicarbonate containing 50 ng of trypsin (sequencing grade). After overnight incubation at 37°C, 15 μl of 0.3% TCA in acetonitrile was added and the separated supernatant was vacuum-dried. Prior to mass spectrometry (MS) analysis, the peptides were dissolved in 6 μl of 0.1% TCA and 5% acetonitrile in water. Liquid chromatography (LC) separations were performed on a capillary column (PepMap100, C18, 3 μm, 100 Å, 250 mm × 75 μm i.d.) at an eluent flow rate of 200 nl/min using a gradient of 3-50% mobile phase B in 90 min. Mobile phase A contained 0.1% formic acid in water and mobile phase B contained 0.1% formic acid in acetonitrile. Mass spectra were acquired in a data-dependent mode with one MS survey scan (with a resolution of 30,000) in the Orbitrap and MS/MS scans of the four most intense precursor ions in the linear trap quadrupole. Identification and quantification of proteins were carried out with version 1.0.13.13 of the MaxQuant software package as described [18]. Data were searched against the international protein index human protein database (version 3.52). The mass tolerance of precursor and sequence ions was set to 7 ppm and 0.35 Da, respectively. Methionine oxidation and the acrylamide modification of cysteine were used as variable modifications. False discovery rates were <1%, based on matches to reversed sequences in the concatenated target-decoy database. Bioinformatic tools For the hydrophobicity analysis, the grand average of hydropathicity (GRAVY) values were determined for the signal sequences using the GRAVY calculator software (S. Fuchs, University of Greifswald, Greifswald, Germany). The frequency of signal sequence hydrophobicity (sorted in classes of hydrophobicity ranging from 0-100 in steps of 4, on a scale of 0-25) was plotted against the total hydrophobicity of the signal sequences (ranging from 0-100). For the signal sequence length analysis, the frequency of signal sequence length (sorted in classes ranging from 0-50 amino acid residues in steps of 4, on a scale from 0-100) was plotted against the total length of the signal sequences. The signal sequence alignments were prepared using the ClustalW software (European Bioinformatics Institute, EBI, Cambridge, UK) and manual refinements. The conformational consensus motif in the signal sequences was identified using the fuzzpro application of the EMBOSS suite and the Geneious Pro software 5.4.4 (available from http://www.geneious.com) [19]. The same software was used for the motif screen. Helical structure prediction and surface visualization of the signal sequences were carried out using the PyMol software package (Schrödinger Inc., Cambridge, MA, USA). Detection of cotransin-sensitive secreted and integral membrane proteins by immunoblotting HepG2 cells (2 x 10^6) were grown in 60 mm diam. dishes for 24 h. Cells were washed twice with phosphate-buffered saline (PBS; pH 7.4), incubated in serum-free medium for 3 h, washed again and cultured in serum-free medium containing cotransin (30 μM) for another 17 h.
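The GRAVY statistic used for the hydrophobicity analysis is simply the mean Kyte-Doolittle hydropathy of a sequence. A re-implementation sketch follows; the scale values are the standard published ones, and the test sequence (the start of the human serum albumin signal peptide region) is used only as an example:

```python
# Kyte-Doolittle hydropathy values (standard scale; the paper used the
# GRAVY calculator, this is an equivalent re-implementation sketch).
KD = {'A': 1.8, 'R': -4.5, 'N': -3.5, 'D': -3.5, 'C': 2.5, 'Q': -3.5,
      'E': -3.5, 'G': -0.4, 'H': -3.2, 'I': 4.5, 'L': 3.8, 'K': -3.9,
      'M': 1.9, 'F': 2.8, 'P': -1.6, 'S': -0.8, 'T': -0.7, 'W': -0.9,
      'Y': -1.3, 'V': 4.2}

def gravy(seq):
    """GRAVY = mean Kyte-Doolittle hydropathy over the sequence."""
    return sum(KD[aa] for aa in seq.upper()) / len(seq)

print(round(gravy("MKWVTFISLLFLFSSAYS"), 2))  # illustrative input only
```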
Total secreted and integral membrane proteins were isolated using the cell fractionation protocol (see above) and resuspended in Roti-Load sample buffer. Proteins were separated on an SDS gradient gel (5-12%, 20 mA, proteins from 5 x 10^5 cells/lane) and detected by immunoblotting [20] using monoclonal or polyclonal antibodies against the target proteins and peroxidase-conjugated anti-mouse or anti-rabbit IgG, respectively (see the Materials section for the antibodies and dilutions). Blocking of unspecific interactions was carried out using 5% skim milk powder in PBS. The intensities of the protein bands were quantified densitometrically using the NIH image analysis software ImageJ. Confocal laser scanning microscopy: HEK 293 cells grown on glass coverslips were transiently transfected with WT.AQP2.NT or CM.AQP2.NT and 0.8 μg of ECFP-ER using PEI according to the supplier's recommendations. After another 24 h of incubation, the coverslips were transferred into a self-made chamber (details on request) and covered with PBS without Ca²⁺ and Mg²⁺. Fluorescence signals were visualised using the laser scanning microscope system LSM710-ConfoCor3 (Carl Zeiss Microscopy GmbH, Jena, Germany; 63x/1.3 oil objective). The GFP fluorescence signals of WT.AQP2.NT or CM.AQP2.NT were detected on one channel (argon laser λ_exc = 488 nm, emission 491-603 nm band pass filter) and the CFP fluorescence signals of ECFP-ER on the second channel (argon laser λ_exc = 458 nm, emission 461-501 nm band pass filter), using a multi-beam splitter MBS 488 (channel one) and an MBS 458 (channel two). The overlay of the signals was computed. Images were analyzed using the ZEN 2010 software (Carl Zeiss Microscopy GmbH, Jena, Germany). For the colocalization of the soluble (unfused) GFP protein and the plasma membrane stain trypan blue, HEK 293 cells were transiently transfected with the vector plasmid pEGFP-C1 as described above. After 24 h of incubation, trypan blue solution was added to the cells (final concentration 0.05%, w/v) and the cells were incubated for 1 min. The GFP fluorescence signals were detected on one channel (argon laser λ_exc = 488 nm, emission 491-603 nm band pass filter) and the trypan blue signals on the second channel (argon laser λ_exc = 561 nm, emission 564-704 nm band pass filter), using a multi-beam splitter MBS 488 (channel one) and an MBS 561 (channel two). The overlay of the signals was computed. Images were analyzed using the ZEN 2010 software (Carl Zeiss Microscopy GmbH, Jena, Germany). Flow cytometry biosynthesis inhibition assay HEK 293 cells (4.5 x 10^5) grown on 12-well plates for 20 h were transiently transfected with 1.2 μg plasmid DNA and PEI per well according to the supplier's recommendations. Cells were incubated for 4.5 h and treated for 19 h with cotransin (final concentration 10 μM, or 1-50 μM for a concentration-response curve), cycloheximide (final concentration 0.1 μg/ml) or DMSO (negative control). The final DMSO concentration in all samples was 1.5%. Cells were washed twice with PBS and the GFP fluorescence signals of the constructs were analyzed by flow cytometry using a FACSCalibur system (BD Biosciences, USA). For each sample, the total fluorescence intensity of 1 x 10^4 cells was analysed using the BD CellQuest Pro software (BD Biosciences, USA). The total amount of GFP fluorescence was normalized by subtracting the background of non-transfected HEK 293 cells.
To eliminate the portion of the GFP fluorescence already present at time t₀ of cotransin treatment (i.e. proteins synthesized during the 4.5 h incubation following transfection, which may persist through the cotransin incubation time), we subtracted the value of cycloheximide-treated cells. In the case of the concentration-response curve, the data of the cotransin-treated cells were normalized to the DMSO control (100%). Statistics Unless otherwise indicated, analyses were performed using Student's t-test (GraphPad t-test calculator, GraphPad Software, Inc., La Jolla, CA); p values < 0.001 were considered significant. Cotransin sensitivity of secretory and integral membrane proteins of HepG2 cells The SILAC approach used [14-16] is outlined in Fig. 1A. HepG2 hepatocytes were used because they are known to secrete a broad range of major plasma and other secretory proteins; moreover, these cells are easy to wash because of their epithelial morphology. For the few sensitive proteins reported, the IC₅₀ values for cotransin-mediated biosynthesis inhibition were in the range of 0.5-5 μM [4,6]. To identify less-sensitive proteins and to discriminate them from completely resistant proteins, we used a cotransin concentration of 30 μM for our study, which is saturating given the reported IC₅₀ values. It was previously demonstrated that cotransin does not affect transcription [4] and does not cause cytotoxicity at a concentration of 30 μM [6]. This may be explained by the fact that cotransin treatment usually does not lead to a complete inhibition of the biosynthesis of the sensitive proteins. Figure 1: A. Outline of the SILAC approach. Cells were grown in medium containing either ¹²C₆ L-lysine and ¹²C₆¹⁴N₄ L-arginine ("light" sample) or ¹³C₆ L-lysine and ¹³C₆¹⁵N₄ L-arginine ("heavy" sample), treated for 17 h with cotransin (30 μM) or DMSO, and pooled cell lysates or supernatants containing total secretory or integral membrane proteins were separated on an SDS gradient gel (4-12%); protein bands were cut out, proteins were digested with trypsin and finally analysed by LC-MS/MS. The forward experiment is shown; for the reverse experiment the isotopic coding was inverted. B. SILAC results: dot plots for secretory (left panel) and integral membrane proteins (right panel). Cells were grown in medium containing either ¹²C₆ L-lysine and ¹²C₆¹⁴N₄ L-arginine ("light" sample) or ¹³C₆ L-lysine and ¹³C₆¹⁵N₄ L-arginine ("heavy" sample) (Fig. 1A). Cells of the light sample were treated with cotransin whereas cells of the heavy sample served as a DMSO-treated control. Total secreted proteins of both samples were mixed, isolated, and separated by SDS-PAGE. Proteins were in-gel digested with trypsin and the resulting peptides were subjected to LC-MS/MS analysis. For the analysis of the integral membrane proteins, labelling and cotransin or DMSO treatment of the cells were performed accordingly. Total cell lysates of the light and heavy samples were mixed and cell fractionations were performed. Proteins of crude membrane preparations were separated by SDS-PAGE, in-gel digested and analysed by LC-MS/MS as described above for the secretory proteins. All experiments were performed twice in a crossover mode, meaning that the light and heavy labelled SILAC samples were treated with cotransin and DMSO, respectively, and vice versa. In these experiments, we considered proteins as cotransin-sensitive if the ratio of protein expression following DMSO or cotransin treatment (DMSO/cotransin) was higher than 1.65 in both the forward and the reverse experiment.
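The sensitivity criterion is a simple two-sided cutoff; as a sketch (the example ratios are toy values, not measured ones):

```python
def classify(ratio_fwd, ratio_rev, cutoff=1.65):
    """Apply the paper's criterion: a protein counts as cotransin-
    sensitive if the DMSO/cotransin expression ratio exceeds 1.65 in
    BOTH the forward and the reverse (label-swapped) experiment."""
    return "sensitive" if min(ratio_fwd, ratio_rev) > cutoff else "resistant"

print(classify(2.4, 1.9))   # -> sensitive
print(classify(2.4, 1.2))   # -> resistant (fails in the reverse run)
```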
A total of 217 proteins could be detected in substantial amounts in both experiments: 53 of them were secreted proteins and 164 integral membrane proteins (Fig. 1B; see the S1 Table for the complete dataset). Surprisingly, 50 out of the 53 secretory proteins (all SP) showed a significant decrease in protein expression following cotransin treatment. In contrast, only 21 of the integral membrane proteins were cotransin-sensitive (SP = 9; SAS = 11; signal sequence not specified = 1) whereas 143 were non-sensitive (SP = 47; SAS = 94; signal sequence not specified = 2). Thus, at this saturating concentration, cotransin does not discriminate between different secretory proteins, which are all more or less sensitive. Instead, the substance discriminates mainly between secretory and integral membrane proteins.

Integral membrane proteins possessing SASs may also be cotransin-sensitive

Secretory proteins invariably possess cleavable SPs. In the case of integral membrane proteins, only a minority possesses SPs; the majority contains SASs. An obvious hypothesis to explain why, at saturating concentration, almost all secretory but only a few integral membrane proteins were cotransin-sensitive is that the substance might selectively affect SP-containing proteins. However, among the 21 sensitive integral membrane proteins found, 9 possess SPs and 11 SASs (see the S1 Table). These results show that, at least in the case of the sensitive integral membrane proteins identified in our study, SPs and SASs seem to be affected without preference.

Confirmation of the SILAC results by expression analysis of specific proteins

To confirm the results of the SILAC experiments, expression of selected secretory or integral membrane proteins was analysed in cotransin-treated (30 μM) or DMSO-treated HepG2 cells by SDS-PAGE/immunoblotting with specific antibodies for the target proteins (Fig. 2). Secreted proteins were precipitated from the cell culture medium by TCA. For the detection of the membrane proteins, crude membrane fractions were used following cell fractionation.

Fig. 2. Confirmation of cotransin sensitivity and cotransin resistance of selected proteins using SDS-PAGE/immunoblotting. After transfection and treatment with cotransin (17 h, 30 μM) (+) or with DMSO (-), secretory and integral membrane proteins were isolated from HepG2 cells. The proteins were identified by SDS-PAGE/immunoblotting using specific primary antibodies and horseradish peroxidase-conjugated anti-mouse or anti-rabbit IgG as secondary antibodies. The cytosolic GAPDH protein does not contain a signal sequence and served as a control for a non-sensitive protein. As examples for secretory proteins (all SPs), cotransin-sensitive Apo B-100 and cotransin-resistant PAI-1 were used. For membrane proteins possessing SPs, cotransin-sensitive CDH2 and cotransin-resistant CNX were analyzed. As examples for membrane proteins containing SASs, cotransin-sensitive Erlin2 and cotransin-resistant CLDN1 are shown. The immunoblots are representative of three independent experiments. The bar graphs shown at the right side of each immunoblot represent mean intensities of the respective protein bands of these three independent experiments ±SD (densitometric analysis using ImageJ). doi:10.1371/journal.pone.0120886.g002

The protein glyceraldehyde-3-phosphate dehydrogenase (GAPDH) served as a control for a non-sensitive protein since this cytosolic protein does not contain a signal sequence. In the case of the secretory protein apolipoprotein B-100 (Apo B-100) (SP) and the membrane proteins cadherin-2 (CDH2) (SP) and Erlin-2 (Erlin2) (SAS), expression was substantially decreased following cotransin treatment, confirming the SILAC data (Fig. 2). The cotransin resistance of the secretory protein plasminogen activator inhibitor 1 (PAI-1) and the membrane proteins calnexin (CNX) (SP) and Claudin-1 (CLDN1) (SAS) could also be proved, indicating that the generated dataset is reliable.
Bioinformatic analysis of the SILAC dataset reveals a putative conformational consensus motif which may mediate cotransin sensitivity of SASs

To date, a consensus sequence mediating cotransin sensitivity of proteins is unknown. It is also not clear whether cotransin interacts with signal sequences directly in the protein-conducting channel or via an indirect mechanism. In the case of the SPs of VCAM1 and the vascular endothelial growth factor, amino acid residues were characterized which are responsible for the sensitivity to the HUN-7293 derivative CAM741 [9]. However, these SPs did not share sequence similarities and consequently a consensus sequence could not be defined [10]. We used the dataset of our SILAC study and bioinformatics tools to identify properties discriminating sensitive and non-sensitive signal sequences. Secretory proteins invariably possess SPs, and the vast majority of those we identified were cotransin-sensitive. Since SPs are usually less hydrophobic than SASs, cotransin sensitivity may correlate with signal sequence hydrophobicity. However, such a correlation could not be found (Fig. 3A). In fact, hydrophobicity is rather variable among the sensitive SPs of secretory proteins and among sensitive and non-sensitive signal sequences of membrane proteins. Another possibility is that cotransin sensitivity may be related to signal sequence length. However, no such correlation was found either (Fig. 3B). We next aligned all available sensitive and non-sensitive signal sequences to look for a consensus motif which may be associated with cotransin sensitivity. We failed to detect such a sequence for sensitive SPs, in agreement with previous results [9,10]. However, it was possible to derive a putative conformational consensus motif in the case of all 12 sensitive SASs of integral membrane proteins (Fig. 4A; cytosolic N tail: 10 sequences; extracellular N tail: 1 sequence; N tail orientation not specified: 1 sequence). The central part of this motif contains two patches of small amino acid residues (Gly, Ala, Ser or Thr, Cys), the first formed by two and the second by either one or two residues. These two patches are separated by two or three bulky amino acid residues and flanked on one or both sides by either large polar, charged or aromatic residues (Fig. 4A). Assuming a strict α-helical structure of the SAS, the structural consequence of this motif is the formation of two distinct cavities, formed by the small amino acid residues, in the surface of the helical structure (Fig. 4B). In the case of the non-sensitive membrane proteins, the motif was found in only 5 out of 143 sequences, demonstrating a highly significant correlation between the presence of the motif in an SAS and its cotransin sensitivity (p value < 0.0001 according to Fisher's exact test) [21].
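The reported enrichment can be checked with a 2x2 Fisher's exact test; a sketch using the counts given above (motif present in 12 of 12 sensitive and 5 of 143 non-sensitive SASs), the contingency layout being our own arrangement of those numbers:

from scipy.stats import fisher_exact

#        motif present, motif absent
table = [[12,   0],   # cotransin-sensitive SASs
         [ 5, 138]]   # non-sensitive SASs

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(p_value)  # far below 0.0001, consistent with the reported significance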
The motif is also present in the SAS of TNF-α, which was previously reported as cotransin-sensitive [8] (Fig. 4A, last sequence in the alignment). As mentioned above, the motif is not present in SPs. None of the 50 sensitive SPs of secretory proteins and only 2 of the sensitive SPs of integral membrane proteins carried a similar sequence.

Fig. 4. A. Sequence alignment. The sequences of the 12 sensitive SASs identified in this study are shown in grey (TM1 of the proteins). Cotransin sensitivity decreases from top to bottom. The sequence of the SAS of TNF-α, which was previously shown to be cotransin-sensitive [8], is shown separately. Nc indicates a SAS with an N-terminus oriented towards the cytoplasm, Ne indicates a sequence with an N-terminus facing the extracellular side (ER: luminal side); n/s indicates an as yet non-specified N tail orientation. The consensus motif consists of two groups of small amino acids in the central part (black), which are separated by two or three bulky amino acid residues and flanked by large charged, large polar or aromatic amino acid residues (white with black frame). The black box below the alignment assigns the possible positions of the amino acid residues in the motif: (#) and (*) indicate the positions of the small amino acid residues forming the first and second cavities, respectively; (~) and ($) indicate the amino acid residues separating and flanking the small residues, respectively. The abbreviations are: IMP2C, integral membrane protein 2 C; IMP2B, integral membrane protein 2 B; HLA II, HLA class II histocompatibility antigen gamma chain; TMEM230, transmembrane protein 230; MHC I, MHC class I antigen; ECE-1, endothelin-converting enzyme 1; Acyl-CoA, Acyl-CoA desaturase; TM Prot2, transmembrane protein 2; Erlin1, Erlin-1; LIMP2, lysosome membrane protein 2; Erlin2, Erlin-2; AGPR1, asialoglycoprotein receptor 1; TNF-α, tumor necrosis factor-α. B. Exemplary α-helical (left) and solvent-accessible surface projection (right) of the SAS of IMP2B. Green colour represents the small amino acid residues forming the two cavities, red colour the separating residues and blue colour the flanking residues. The two cavities are indicated by arrows. C. Introduction of the identified conformational consensus motif into the SAS of the cotransin-resistant AQP2 protein. Upper panel: Alignment of the SAS (highlighted in grey) of construct WT.AQP2 and mutant CM.AQP2. In the case of CM.AQP2, the motif consists of the flanking amino acid residues (white with black frame), the small residues forming the cavities in the surface of the helix (black) and the separating more bulky residues lying between the cavities. Lower panel: α-helical structure of WT.AQP2 and mutant CM.AQP2 visualized using the program PyMol (left side). The structural level is shown by the solvent-accessible surface of the α-helices (right side). Green colour represents the small amino acid residues forming the two cavities, red colour the separating residues and blue colour the flanking residues. The two cavities, which are a result of the mutations, are indicated by black arrows.

Experimental confirmation that the putative conformational consensus motif mediates cotransin sensitivity of SASs

To demonstrate that the identified sequence is indeed responsible for the cotransin sensitivity of SASs, we introduced the motif into a cotransin-resistant SAS. We took the aquaporin 2 (AQP2) water channel protein as a model, a hexahelical integral membrane protein with cytosolic N and C tails. To introduce the two cavities of the motif, the point mutations F25G, F26G, G27L and Q33K were introduced into the TM1 of a C-terminally GFP-tagged variant of AQP2 (resulting constructs: mutant F25G, F26G, G27L, Q33K = CM.AQP2; wild type = WT.AQP2). The structural consequences of these mutations are shown in Fig. 4C; the resulting two cavities are indicated by black arrows. HEK 293 cells were transiently transfected with the constructs and treated with cotransin (10 μM), DMSO solvent or cycloheximide (0.1 μg/ml). After 19 h of incubation, the total GFP fluorescence of 1 x 10^4 cells was analyzed using flow cytometry as a measure of biosynthesis. To avoid falsification of the results by proteins already synthesized at t0 of cotransin treatment, cycloheximide values were subtracted. In the case of mutant CM.AQP2, a significant reduction of the GFP fluorescence signals was observed, indicating that introduction of the consensus sequence by the 4 point mutations indeed induced cotransin sensitivity of the SAS (Fig. 5A). Using the same flow cytometry assay but variable cotransin concentrations, a concentration-response curve could be derived for the cotransin-mediated biosynthesis inhibition of CM.AQP2 (Fig. 5B). The calculated IC50 value of 6.5 μM is comparable to those described previously (e.g., endothelin B receptor = 5.4 μM [6]).
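An IC50 of this kind is typically estimated by fitting a sigmoidal (Hill-type) inhibition model to the normalized fluorescence values; a minimal sketch with hypothetical data points, not the measured curve:

import numpy as np
from scipy.optimize import curve_fit

def inhibition(c, ic50, hill):
    # Fraction of the DMSO control remaining at cotransin concentration c
    return 100.0 / (1.0 + (c / ic50) ** hill)

# Hypothetical concentration-response data (uM, % of DMSO control)
conc = np.array([1.0, 2.0, 5.0, 10.0, 20.0, 50.0])
resp = np.array([95.0, 88.0, 60.0, 38.0, 20.0, 8.0])

params, _ = curve_fit(inhibition, conc, resp, p0=[5.0, 1.0])
print("IC50 = %.1f uM" % params[0])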
However, it cannot be excluded that the mutations led to a non-functional signal anchor sequence of AQP2 in the above experiment. In this case, one of the more C-terminally located transmembrane domains could function as an alternative SAS [22], which might possess the observed higher cotransin sensitivity in comparison to the wild-type TM1. To rule out this possibility, truncated variants of the above proteins were constructed, encoding only the N tail, TM1 and the first extracellular loop of AQP2. Both constructs were N-terminally tagged with GFP (resulting constructs: mutant F25G, F26G, G27L, Q33K = CM.AQP2.NT; wild type = WT.AQP2.NT). In these fusions, TM1 is the sole transmembrane domain which can function as a SAS. To analyze targeting of the constructs to the ER membrane, we used a previously published microscopy assay which is based on the localization of the fluorescence signals of GFP fusion proteins [23,24]. If a fused sequence can function either as an SP or a SAS, the GFP moiety is targeted to the ER membrane, leading to a reticular GFP fluorescence pattern typical for the ER. Under these conditions, the nucleus is free of GFP fluorescence. If a fused sequence is unable to function as a signal sequence, GFP is located in the cytosol, leading to a diffuse fluorescence pattern filling the cell's interior. Moreover, due to the nuclear targeting signal of GFP [25], fluorescence signals are also detectable in the nucleus. HEK 293 cells were transiently co-transfected with the constructs and the ER marker ECFP-ER, and colocalization of the GFP fluorescence signals with the signals of ECFP-ER was analyzed using LSM (Fig. 5C, upper panel). Both WT.AQP2.NT and CM.AQP2.NT showed a reticular fluorescence pattern and could be readily colocalized with the ER marker. We also used unfused soluble GFP as a control for a protein which does not contain a SAS. In contrast to WT.AQP2.NT and CM.AQP2.NT, this protein was distributed diffusely throughout the cells and was also transferred to the nucleus (Fig. 5C, lower panel).
These data show that the SAS of CM.AQP2.NT is still functional despite the mutations. Taken together, these results demonstrate that the identified putative conformational consensus motif of SASs is indeed involved in mediating cotransin sensitivity.

Discussion

In previous studies of cotransin and the related cyclodepsipeptides [5,6,8-10], only a small subset of sensitive proteins was identified. In particular, proteins with lower sensitivity were neither studied systematically nor differentiated from completely resistant proteins. Moreover, it was unknown whether SASs may be affected by cotransin in significant numbers. A consensus motif in signal sequences mediating cotransin sensitivity could also not be defined. We performed a proteomic SILAC approach on HepG2 cells using saturating cotransin concentrations (30 μM) to address these questions. Under these conditions, almost all identified secretory proteins were cotransin-sensitive whereas the majority of the integral membrane proteins were resistant. Given that the SILAC experiments failed to detect very lowly expressed proteins and that HepG2 cells do not express the complete proteome, it is conceivable that the number of actually cotransin-sensitive proteins is still underestimated. The idea of developing cotransin analogues affecting the synthesis of individual proteins, i.e., of getting from a selective to a specific substance by derivatization [6], will thus be difficult to achieve. In our study, only two secretory proteins were found to be completely cotransin-resistant, namely the plasminogen activator inhibitor 1 and calumenin [26-29]. Although both proteins contain SPs according to the UniProt database [30-32], it should be analyzed whether these proteins may use a secretion pathway independent of Sec61, such as the recently described Sec62 pathway [33]. Interestingly, the expression of one protein, namely the placental protein 12 [34], seems to be up-regulated following cotransin treatment. In this particular case, cotransin may strengthen the interaction of the putative SP with Sec61. More likely, however, the substance could inhibit the biosynthesis of an unknown protein involved in the down-regulation of placental protein 12. Our results for integral membrane proteins rule out the possibility that cotransin might affect exclusively the SPs of membrane proteins, since a significant number of SASs was inhibited, too. Moreover, in the case of sensitive integral membrane proteins, the data revealed that both types of signal sequences seem to be affected without preference.

Fig. 5. B. Concentration-response curve of the cotransin-mediated biosynthesis inhibition of CM.AQP2. Cells were treated with increasing concentrations of cotransin (1-50 μM in 1.5% DMSO). Shown are mean values of three independent experiments ± SD. Fluorescence was quantified by flow cytometry measurements as above and data were normalized to the DMSO control (1.5% DMSO). The calculated IC50 value is indicated. C. Upper panel. Colocalization of the GFP fluorescence signals of the truncated constructs WT.AQP2.NT and CM.AQP2.NT (left side, green) with the CFP fluorescence signals of the cotransfected ER marker ECFP-ER (middle, red). The fluorescence signals were recorded using confocal LSM and computer-overlayed (right side; colocalization is indicated by yellow color). The xy-scans show representative cells and are representative of three independent experiments. The cartoon on the right side shows a schematic depiction of the constructs. The 4 point mutations of construct CM.AQP2.NT are indicated by (****). Lower panel.
Subcellular localization of unfused soluble GFP (control protein which does not contain a signal sequence). For clarity, the GFP fluorescence signals (left side, green) and those of the plasma membrane dye trypan blue (middle, red) were recorded in this case and computer-overlayed (right panel). The xy-scans show representative cells and are representative of three independent experiments. doi:10.1371/journal.pone.0120886.g005

Using bioinformatic tools, we showed that the cotransin sensitivity of a signal sequence correlates neither with signal sequence length nor with hydrophobicity. Moreover, a general consensus motif mediating cotransin sensitivity, present in both SPs and SASs, evidently does not exist. In the case of SPs, the failure to define a consensus sequence is in agreement with previous results [9,10]. In contrast to the situation with SPs, however, we were able to define a conformational consensus motif in sensitive SASs. By introducing the motif into the cotransin-resistant SAS of AQP2, the functionality of this motif could be confirmed. In a recent study of TNF-α, residues T45 and T46 of its SAS were shown to determine cotransin sensitivity [7]. These findings are easily explained by our data: residues T45/T46 build the first cavity of the conformational consensus motif, which is also present in the SAS of TNF-α (Fig. 4A, last sequence in the alignment, first black-shaded letters). Taken together, our results may lead to a modified (but still highly speculative) working hypothesis for the mechanism of action and selectivity of cotransin. As suggested previously, cotransin may displace sensitive signal sequences from their acceptor sites in Sec61 [5,9]. The fact that the motif was only detectable in SASs but not in SPs suggests that there may be (at least slightly) different binding sites for signal sequences in Sec61 with variable responsiveness to the compound. The majority of the SASs are cotransin-resistant, and these sequences may interact with contact sites which are not accessible to the substance. Sensitive SASs may bind to an alternative site which can be influenced by the compound. The binding behaviour of these sensitive SASs may then be determined by the identified conformational consensus motif. The various SPs could in turn interact with a slightly different site which is in principle cotransin-sensitive but whose responsiveness depends more distinctly on the cotransin concentration. Whereas the characterization of a sequence motif mediating cotransin sensitivity of a subgroup of proteins represents a step forward, the mechanism of action and selectivity of cotransin is still far from completely understood. Most progress would come, of course, from the identification of the binding sites of cotransin itself.

Supporting Information

S1 Table. Cotransin-sensitive and non-sensitive secretory and integral membrane proteins detected by SILAC and quantitative mass spectrometry. The UniProt identification number, ratio of the forward and backward experiment, signal sequence type (SP or SAS) and signal sequence length are indicated. (PDF)
X-Mark: a benchmark for node-attributed community discovery algorithms

Grouping well-connected nodes that also result in label-homogeneous clusters is a task often known as attribute-aware community discovery. When approaching node-enriched graph clustering methods, rigorous tools need to be developed for evaluating the quality of the resulting partitions. In this work, we present X-Mark, a model that generates synthetic node-attributed graphs with planted communities. Its novelty consists in forming communities and node labels contextually while handling categorical or continuous attributive information. Moreover, we propose a comparison between attribute-aware algorithms, testing them against our benchmark. According to different classification schemas from recent state-of-the-art surveys, our results suggest that X-Mark can shed light on the differences between several families of algorithms.

Introduction

Networks are the natural way to express phenomena whose unit elements exhibit complex interdependent organization. During the last decades, the availability of data expressing meaningful complex structures has increased significantly; hence the definition of network science as "the study of the collection, management, analysis, interpretation, and presentation of relational data" (Brandes et al. 2013), built on top of the mathematical tools of graph theory. Among the massive number of complex network fields and sub-fields, community discovery (henceforth, CD) is one of the most important and critical tasks, aiming to group the actors of a system according to the relations they form. The lack of general criteria-from the ill-posed definition of community to the uncountable number of alternative approaches-leads to the challenging problem of evaluating the quality of the resulting CD partitions. Classically, both internal measures and external methodologies have been provided to assess the quality of CD algorithms. An internal evaluation adopts a quality measure to assess the well-defined structural segmentation of the communities; conversely, an external evaluation aims to estimate the agreement between the communities and a possible ground-truth partition. In real-world networks, ground-truths are often defined by one specific property/attribute whose values are attached to the nodes. Several epistemological issues behind the practice of evaluating CD outputs against such ground-truths were recently investigated (Peel et al. 2017); despite some possible workarounds (Rabbany and Zaïane 2015), real-world networks are not recommended for testing purposes. Another option consists of adopting synthetic benchmarks designed explicitly to mimic the meso-scale level of real-world networks by building artificially planted sets of communities, and of evaluating CD algorithm performances at various difficulty levels. Moreover, driven by the homophily principle (McPherson et al. 2001), node attributes are often used to improve CD-or, at least, redefine it w.r.t. external aspects-by leveraging both topological and label-homogeneous clustering criteria. A node-attributed network encodes information about the nodes' properties/qualities in the form of attributes, in line with the general purpose of feature-rich networks (Interdonato et al. 2019), where the goal is to merge the graph topology with other possibly meaningful external information.
In the redefinition of the CD task-known as the node-attributed or labeled CD task (henceforth, LCD)-the aim is to find well-connected communities that are also homogeneous w.r.t. the attributes carried by the nodes. It follows that the evaluation environment should be improved at the same time: for testing LCD algorithm outputs, connectivity-based benchmarks alone are not enough. Motivated by all the above-mentioned evaluation issues, often not approached in a systematic manner in the LCD task, we address them in this work (i) by building a synthetic generator with attribute-aware planted communities, X-Mark, and (ii) by testing different LCD approaches against it. In detail, our two main contributions are to provide a new benchmark for testing LCD algorithms, and then to carefully evaluate such algorithms in light of the class they belong to according to state-of-the-art taxonomies, highlighting their ability to perform better/worse on incrementally complex real-world scenarios. The rest of the paper is organized as follows. In Sect. 2, we review the state-of-the-art of attribute-aware network models, synthetic benchmarks, and LCD approaches. In Sect. 3, we introduce X-Mark, our node-attribute-enriched network generator that handles label-homogeneous communities, embedding both categorical and continuous attributes. In Sect. 4, we test several LCD families of approaches against X-Mark, to assess to what extent the algorithms can reconstruct the artificial communities embedded in the benchmark. Finally, Sect. 5 concludes the work, summarizing the results and possible future lines of research.

Related work

An overview of several topics is needed to provide the full context surrounding the present work, i.e., the state-of-the-art on network models, synthetic generators, and LCD techniques.

Network models

Network models aim to capture and replicate some essential properties underlying real-world phenomena, from heavy-tailed degree distributions to high clustering coefficients and short average path lengths [i.e., small-world properties (Watts and Strogatz 1998)], as well as nonzero degree-degree correlations, community structure, and homophily. The well-known Preferential Attachment mechanism (henceforth, PA) of the Barabási-Albert model (Barabási and Albert 1999) generates scale-free networks with a power-law degree distribution, following the principle that the more connected a node is, the more likely it is to receive new links. Extensions of PA include steps for the formation of triads (Holme and Kim 2002), or for allowing the growth of degree-assortative networks (Catanzaro et al. 2004) or of communities with power-law distributions (Xie et al. 2007). Alternative approaches-such as the Community Guided Attachment and Forest Fire models (Leskovec et al. 2005)-exploit other network properties, e.g., self-similarity and hierarchies, for generating community structure. Network models that include homophily in the generative process aim to study how such a principle can influence the properties and the evolution of a system. A standard procedure shared by several models is that the probability of forming connections depends both on the degree (i.e., PA) and on the attributes the nodes encode (Gong et al. 2012; Pasta et al. 2014; Kim and Altmann 2017; Shah et al. 2019). Several analytical experiments suggest that modeling homophily-aware networks produces interesting results.
In Kim and Altmann (2017), the authors observe different shapes of the cumulative degree distributions, which transform from concave to convex when homophily is forced to play a substantial role in the generative process; such convexity is interpreted as the power of homophily to amplify the rich-get-richer effect (beyond the contribution of PA alone). In Pasta et al. (2014), it is observed that high degree assortativity acts as a negative force in generating homophilic networks. Moreover, the mechanism of focal closure (i.e., the formation of links between similar nodes without common neighbors) differs from structural closure (Murase et al. 2019), and their cumulative effects imply the formation of core-periphery structures (Asikainen et al. 2020). In the context of opinion dynamics, several works introduce homophily-aware network generators for controlled analyses of human dynamics: false uniqueness and false consensus are amplified in heterophilic and homophilic networks, respectively (Lee et al. 2017); highly homophilic networks exhibit meaningful community structure, which plays a role in the formation and cohesion of groups (Gargiulo and Gandica 2016). In such models, it is worth noticing that communities are not built-in, since they are extracted a posteriori with a CD algorithm. These examples lead us to an important distinction between network modeling and synthetic benchmarks.

Synthetic benchmarks

Synthetic benchmarks allow researchers to evaluate their algorithms on data whose characteristics resemble those observed in real-world networks. Contrary to network models, the rationale behind the construction of synthetic benchmarks is to use ground-truths to evaluate the fitness of the partitions resulting from CD methods. Among the most famous generators used for classic CD, we find the Girvan-Newman (GN) (Girvan and Newman 2002) and the Lancichinetti-Fortunato-Radicchi (LFR) (Lancichinetti et al. 2008) benchmarks, as well as the family of stochastic blockmodels (SBMs) (Holland et al. 1983; Karrer and Newman 2011). The GN benchmark (Girvan and Newman 2002) is a graph of 128 nodes with an expected degree of 16, divided into four communities of equal size. Two parameters identify the probabilities of intra- and inter-cluster links, respectively. The LFR benchmark (Lancichinetti et al. 2008) allows for a user-defined number of nodes and distributes both node degrees and community sizes according to a power law. A parameter (i.e., the structure mixing parameter μ) identifies the fraction of links that a node has to share with other nodes in its cluster, while the remaining fraction is shared with random nodes in other parts of the graph. In the SBM (Holland et al. 1983), nodes are assigned to one of k user-defined communities; then, links are placed independently between nodes with probabilities that are a function of the community membership of the nodes; a degree-corrected version of the SBM allows for heterogeneous node degrees (Karrer and Newman 2011). Such methods are designed to evaluate static graph partitions and do not natively support the generation/analysis of node-attributed graphs. Homophily-aware synthetic benchmarks are developed to cope with this limitation of classic benchmarks, allowing for more reliable controlled testing of LCD methods. Among the benchmarks specifically designed to generate node-attributed networks with communities, we find LFR-EA (Elhadi and Agam 2013), ANC, and acMark (Maekawa et al. 2019).
In LFR-EA (Elhadi and Agam 2013), the LFR benchmark is extended with a noise parameter that controls the percentage of homogeneity within communities. The user can define the number of attributes and the number of values for each attribute, as well as the percentage of random sampling with or without replacement (i.e., how the values distribute among the communities). Interesting LCD testing against LFR-EA can be found in Pizzuti and Socievole (2018) and Berahmand et al. (2020). In ANC, nodes with only continuous attributes are generated, whose values are spread out through a user-defined standard deviation parameter; some representative nodes of each community are initialized, then a K-medoids clustering is performed to build communities, and a user-defined number of intra- and inter-links is generated. The node-community assignment depends only on the labels of the representative nodes. LCD testing against ANC can be found in Falih et al. (2017) and Liu et al. (2020). In acMark (Maekawa et al. 2019), a Bayesian approach is used to generate node-attributed graphs with communities. It enables the specification of various degree distributions and cluster sizes, and of both categorical and continuous attribute types. Finally, it is also worth mentioning a set of works modifying SBMs to cope with node covariates, as in Tallberg (2004), where this is achieved via a multinomial probit model. Often referred to as CSBMs (covariate stochastic blockmodels) (Sweet 2015), they constitute a hybrid between the network models and the synthetic benchmarks previously mentioned. Since they can create networks with communities correlated with node attributes, they often aim to test the ability of algorithms to make use of metadata (i.e., whether metadata can be helpful to the LCD task). The work in Newman and Clauset (2016) gives a prototypical example of this, where a correlation between structure and attributes is created by matching the latter with the true community assignments of nodes in an SBM; this approach is found to be effective also for generating multi-layer synthetic networks with ground-truth (Contisciani et al. 2020), and in the network inference problem, by systematically studying the influence of the attributes on the correlation between network data and metadata (Fajardo-Fontiveros et al. 2021). Other attributed SBMs can be found in Hric et al. (2016), where a multi-layer-based approach develops one layer modeling relational information between attributes and another modeling connectivity, then assigns nodes to communities by maximizing the likelihood of the observed data in each layer; in Stanley et al. (2019), a similar approach is able to handle multiple continuous attributes. Beyond augmented SBMs, in Emmons and Mucha (2019) the map equation is modified to control the varying importance of metadata with a tuning parameter.

Labeled or node-attributed community discovery

LCD focuses on obtaining structurally well-defined partitions that also result in label-homogeneous communities. Several comparative studies and surveys have been proposed to classify the large and increasing number of node-attributed CD algorithms by leveraging taxonomies that group the algorithms according to the point of view adopted for the clustering step. Figure 1 summarizes them. While Bothorel et al. (2015) propose a preliminary low-level classification, Falih et al.
(2018) aggregate the algorithms into three general families: (i.a) topological-based, (ii.a) attribute-based, and (iii.a) hybrid approaches. Such a taxonomy focuses primarily on how the original graph is manipulated to take attributive information into account, namely (i.a) attaching it to the topology, (ii.a) merging the two at the expense of the original links, or (iii.a) using an ensemble method. The important aspect of time (e.g., modifying the original structure before or contextually to the clustering step) leads Chunaev (2020) to propose a different classification schema: algorithms are grouped according to the moment when structure and attributes are fused, distinguishing between (i.b) early-fusion, (ii.b) simultaneous-fusion, and (iii.b) late-fusion approaches. Just to give an idea of the complexity of defining appropriate taxonomies, an approach like CESNA (Yang et al. 2013), built on top of a probabilistic generative process that treats node attributes as latent variables, can be viewed either as a hybrid or as a simultaneous-fusion approach, but also as an approach similar to the hybrid network models outlined in the previous paragraph. For a review of specific LCD algorithms, we refer the reader to the mentioned surveys. Nevertheless, the LCD approaches that we test against X-Mark will be described in more detail in the appropriate analytical section.

X-Mark

Throughout the work, we refer to the following definition of a node-attributed graph:

Definition 1 (Node-attributed Graph) G = (V, E, A) is a node-attributed graph, where V is the set of nodes, E the set of edges, and A a set of categorical or continuous attributes such that A(v), with v ∈ V, identifies the set of categorical or continuous values associated with v.

X-Mark aims to generate an undirected and unweighted node-attributed graph G along with an attribute-aware planted partition C while guaranteeing: (i) a power-law node degree distribution and (ii) a power-law community size distribution; (iii) a user-defined noise distribution within homogeneous communities; (iv) a user-defined intra-/inter-community edge distribution.
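Definition 1 maps directly onto a graph whose nodes carry attribute dictionaries; a minimal illustration with networkx, using toy attribute names and values of our own choosing:

import networkx as nx

G = nx.Graph()
# V and E
G.add_edges_from([(0, 1), (1, 2), (0, 2), (2, 3)])
# A(v): one categorical and one continuous attribute per node
attrs = {0: {"color": "red",  "score": 0.1},
         1: {"color": "red",  "score": 0.3},
         2: {"color": "blue", "score": 2.7},
         3: {"color": "blue", "score": 2.9}}
nx.set_node_attributes(G, attrs)
print(G.nodes[2]["color"])  # 'blue'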
In detail, the X-Mark network generation procedure works as reported in Algorithm 1, subject to the controlling parameters summarized in Table 1; it articulates into four steps:

Step 1: Node generation and degree assignment, subject to the average degree ⟨k⟩ and a power-law exponent (line 1, Algorithm 1);

Step 2: Community size sequence generation, imposing a power-law exponent (line 2), and identification, for each attribute, of the representative label of each community, sampled from m_cat or m_cont (lines 3-4); in detail: (i) for each categorical attribute, a random assignment from the m_cat possible values in the domain of the attribute, where m_cat ≥ 2; (ii) for each continuous attribute, a random assignment from an ad-hoc multimodal distribution having m_cont possible peaks, where m_cont ≥ 2, the first peak having mean 0 and each subsequent one being positively distant from the previous peak;

Step 3: Community and node-attribute generation (lines 5-8), handling different strategies for categorical and continuous attributes, i.e.: (i) for each categorical attribute, assign to the node the same value as its community with probability 1 − ν; (ii) for each continuous attribute, assign to the node a value picked from a normal distribution with the community label as mean and σ as standard deviation;

Step 4: Edge sampling, subject to the expected ratio between intra- and inter-community edges as expressed by the mixing parameter μ (line 9), as previously defined in Lancichinetti et al. (2008).

Among the model hyper-parameters reported in Table 1, the following are peculiar to X-Mark: (i) ν: it tunes the level of noise within each community; a low value of ν implies the emergence, within each community, of a majority label, with ν = 0 modelling the extreme scenario where all the nodes within a community share the same categorical attribute value; (ii) a peak-separation parameter: it affects the speed at which the benchmark starts to produce less well-separated clusters according to the attribute value distribution; in this work, we impose a value of 10; (iii) m_cat and m_cont: integers modeling the domain size for categorical and numerical attributes, respectively; in the rest of the article, for the sake of simplicity, we implicitly treat such parameters as lists of integers, meaning that each attribute has its proper m value in the range expressed by the list.

X-Mark characterization

In this subsection, we provide an overview of some X-Mark characteristics. For this purpose, we introduce a set of measures for the analysis; then, we split the study according to the differences between categorical and continuous attribute modeling.

Evaluation measures

To characterize the behaviour of the model in the presence of categorical attributes, we relate the observed and expected label homophily. In detail, we calculate the observed homophily, H, as the probability that two connected nodes share the same attribute value, and compare it to the expected one, H_exp, namely the probability that a randomly chosen node pair shares the same attribute value. Formally:

H = (1/|E|) Σ_{(u,v) ∈ E} 1[A(u) = A(v)],    H_exp = Σ_{{u,v} ⊆ V} 1[A(u) = A(v)] / (|V|(|V|−1)/2),

where 1[·] is the indicator function. Since H and H_exp do not explicitly take the homophilic contribution of each community/node into account, we also provide (i) a function capturing noise within communities (i.e., the percentage of the majority attribute value within a cluster), namely Purity (Citraro and Rossetti 2019), and (ii) two measures explaining the homophilic contribution of each node, namely Peel's assortativity (Peel et al. 2018) and Conformity.
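A minimal sketch of the two homophily estimates for a single categorical attribute (networkx; the attribute name "color" is a placeholder):

from itertools import combinations

def observed_homophily(G, attr="color"):
    # fraction of edges whose endpoints share the same attribute value
    same = sum(1 for u, v in G.edges()
               if G.nodes[u][attr] == G.nodes[v][attr])
    return same / G.number_of_edges()

def expected_homophily(G, attr="color"):
    # fraction of all node pairs sharing the same attribute value
    pairs = list(combinations(G.nodes(), 2))
    same = sum(1 for u, v in pairs
               if G.nodes[u][attr] == G.nodes[v][attr])
    return same / len(pairs)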
Given a community C, its purity P_C is the product of the frequencies of the most frequent categorical attribute values carried by the nodes within C. Formally:

P_C = Π [ max_a Σ_{v ∈ C} a(v) / |C| ],

where the product runs over the attributes, the max over the values a of each attribute, A is the attribute value set, a ∈ A is an attribute value, and a(v) is an indicator function that takes value 1 iff a ∈ A(v). The purity of a complete partition 𝒞 is then the average of the purities of the communities that compose it:

P = (1/|𝒞|) Σ_{C ∈ 𝒞} P_C.

Since homophily H gives only one global score, we might not identify the contribution of single nodes or observe differences between intra- and inter-homophilic connections. Peel's assortativity and Conformity compute for each node its homophilic embeddedness within the neighborhood it belongs to. We evaluate continuous attributes using the Within-Cluster Sum of Squares (WCSS):

WCSS = Σ_{i=1..k} Σ_{v ∈ C_i} (x_v − M_i)²,

where, for each community C_i, i = 1, ..., k, M_i is the centroid of the nodes within the community and x_v is the attribute value of node v. Moreover, we leverage the concept of silhouette to represent graphically how tight and well-separated the clusters are. Detailed information is left to the reference paper (Rousseeuw 1987). Finally, to analyze the degree of connectivity of homogeneous clusters, we compute the modularity score, i.e., the fraction of the edges that fall within a given community C minus the expected fraction if they were distributed following a null model:

Q = (1/2m) Σ_{v,w} [ A_{v,w} − k_v k_w / (2m) ] δ(c_v, c_w),

where m is the number of graph edges, A_{v,w} is the entry of the adjacency matrix for v, w ∈ V, k_v and k_w are the degrees of v and w, and δ(c_v, c_w) is an indicator function taking value 1 iff v and w belong to the same community, 0 otherwise.
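A sketch of the purity measure as defined above, for communities given as node collections; modularity itself is available out of the box in networkx:

import networkx as nx

def community_purity(G, community, attrs=("color",)):
    # product over attributes of the frequency of the most common value
    p = 1.0
    for a in attrs:
        values = [G.nodes[v][a] for v in community]
        p *= max(values.count(x) for x in set(values)) / len(community)
    return p

def partition_purity(G, partition, attrs=("color",)):
    return sum(community_purity(G, c, attrs) for c in partition) / len(partition)

# modularity of a partition (list of node sets):
# nx.algorithms.community.modularity(G, partition)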
Categorical attributes. In this scenario, homogeneous communities are well-connected sets of nodes within which most nodes share the same attribute value. The parameter ν models the percentage of nodes labeled according to a randomly assigned attribute value among the user-defined m_cat possible ones; the remaining fraction is labeled according to the preferred community value. Thus, imposing ν = 0.2 means that at least 80% of nodes within a community share the same attribute value. The rationale behind the inclusion of the majority value justifies the case of a binary categorical attribute (i.e., m_cat = 2), where ν = 1 leads to a lower bound of observed homophily of 0.5. Figure 2a shows the value of H as a function of μ and ν. We focus on two different setups, m_cat = 2 and m_cat = 5: in the former, the minimum observed homophily is around 0.5 (as is H_exp, not displayed); in the latter, the minimum observed homophily is around 0.3 (as is H_exp, not displayed). In general, the plots in Fig. 2a show how X-Mark can implicitly model homophily by only considering cluster homogeneity. Indeed, H decreases as both randomly rewired connections and attribute noise within communities increase; e.g., for high values of μ and ν (i.e., from 0.6 to 0.9), H and H_exp tend to coincide, with the consequence of creating a very hard scenario for all structural-only, attribute-only and attribute-aware CD strategies. To better understand how homophily emerges from such parameters, we analyzed the node-centric homophilic behaviour of the network. Peel's assortativity and Conformity give us two different points of view. In Fig. 2c, we show the local homophily scores of the two measures for the outlined setups. In particular, two peaks emerge when well-defined (i.e., well-connected and homogeneous) communities are modelled (i.e., μ = 0.2 and ν = 0.2), telling us that the network has a largely (majority) homophilic behavior, while smaller heterophilic zones emerge mostly from inter-cluster noise. Noisy communities decrease the within-cluster homophilic contribution even if the communities are well-connected (i.e., μ = 0.2 and ν = 0.8). The distributions observed for both measures describe similar scenarios: nodes tend to concentrate around a mean value that is neither homophilic nor heterophilic, except for very well-defined and homogeneous communities. To conclude, it follows that clustering modularity depends only on the parameter μ, and clustering purity only on the parameter ν. Figure 2b summarizes this.

Continuous attributes. In a continuous attribute scenario, homogeneous communities are clusters with low standard deviations. As outlined in Fig. 3 (the leftmost 3D plot of the figure), the Within-Cluster Sum of Squares (WCSS) increases as σ increases, independently of the structure mixing parameter μ. Modeling continuous attributes by controlling m_cont allows deducing the number of dense and well-separated clusters, in particular when using low σ values. In Fig. 3, we show some examples, using the following m_cont configurations on two networks with low (σ = 1.5) and relatively high (σ = 7.5) standard deviations, respectively: m_cont = [2, 2], m_cont = [2, 4], and m_cont = [3, 3]. Indeed, well-separated clusters are visible when σ is low. We executed K-Means (MacQueen 1967) over the network configured with m_cont = [2, 2] to show that the centroid-based clustering algorithm is able to automatically recognize the number of planted components from the attribute point of view. On the other hand, such well-separated clusters do not match the planted communities emerging from the structural point of view, i.e., the number of communities subject to the generated community size sequence. We can continue to refer to the first ones as the attribute-component of the partition, and to the second ones as its structural-component. Indeed, the differences between these two components are relevant since they induce potentially distinct, although meaningful, clusterings. In Fig. 3, we show the silhouette scores of each clustering found by K-Means with (i) k = 4 (the optimal value suggested by the elbow method), and (ii) k equal to the number of planted communities generated by X-Mark. The silhouette scores are different, and qualitatively worse clusters are found according to the latter strategy, i.e., when considering the structure point of view to tune an attribute-only clustering approach. With this last point, we anticipate one of the fundamental problems dissected in the next section: how to combine the attribute-component view and the structural one while performing an attribute-aware graph clustering?
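The attribute-component analysis just described (K-Means plus elbow and silhouette scores) can be reproduced with standard tooling; a sketch on a toy node-feature matrix X of our own making (scikit-learn):

import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
# toy attribute matrix: two well-separated peaks per feature
X = np.vstack([rng.normal(0, 1.5, (100, 2)), rng.normal(10, 1.5, (100, 2))])

inertias, silhouettes = {}, {}
for k in range(2, 8):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    inertias[k] = km.inertia_              # WCSS values: elbow-method input
    silhouettes[k] = silhouette_score(X, km.labels_)

print(max(silhouettes, key=silhouettes.get))  # best k by silhouette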
Experiments

This section provides an analytical comparison of LCD algorithms against X-Mark. We compare the algorithms by considering the several classification schemas emerging in the LCD literature, as discussed in Sect. 2.

Algorithms

We compare (i.a) topological-based, (ii.a) attribute-based, and (iii.a) hybrid algorithms, contextually to (i.b) early-fusion, (ii.b) simultaneous-fusion, and (iii.b) late-fusion ones.

Ensemble/Selection (iii.a, iii.b): methods falling within this category aim to fuse (or choose between) topological and attribute information after both CD (for structure) and classic clustering methods (for attributes) have been performed. We consider: (i) CSPA (Strehl and Ghosh 2002; Elhadi and Agam 2013), a method that uses a graph representation to solve cluster ensembles, partitioning an induced similarity graph built on top of the binary similarity matrices extracted from the partitions; (ii) MCLA (Strehl and Ghosh 2002), another graph-based approach, where each partition is represented as a node, then linked to the other ones by considering their similarity; (iii) Selection (Elhadi and Agam 2013), which chooses a preferable partition between a structural and an attributive one (Louvain (Blondel et al. 2008) and K-Means, respectively, in this work); the choice is made by looking at the estimated mixing parameter of the graph: if such a value is less than a certain experimental value μ_lim (i.e., 0.55 in the current study), Louvain is selected, K-Means otherwise; (iv) Late-Fusion (Liu et al. 2020), which combines two partitions (again, a structural and an attributive one) by integrating their adjacency matrices through a linear combination; then, a CD algorithm segments the final induced graph.

Modifying quality functions (i.a, ii.b): methods falling within this category aim to modify the objective functions of classical CD algorithms by integrating attribute-aware criteria. We consider: (i) EVA (Citraro and Rossetti 2019, 2020), a Louvain extension that integrates an attribute-aware function (i.e., Purity) for grouping homogeneous communities through a linear combination; it works with categorical and ordinal attributes; (ii) I-Louvain (Combe et al. 2015), a Louvain extension that includes an attribute-aware objective function called Inertia; no parameters are involved, but the algorithm works only with continuous attributes.

Distance-based (ii.a, i.b): methods falling within this category perform the attribute-aware clustering on a distance matrix obtained by fusing structure and attribute distance functions; common metrics for structural distance are shortest path lengths or Jaccard similarity. We consider: (i) ANCA (Falih et al. 2017), which selects a set of seeds against which each node characterizes its topological and semantic similarity, then computes a distance matrix factorization and runs K-Means over it; we apply the BiCC criterion for seed selection and the shortest path length to compute topological similarity, as suggested in the original paper; (ii) SToC (Baroni et al. 2017), which uses a multi-objective distance to fuse structure and attribute node similarities; the user is assumed to provide a semantic attraction ratio s and a topological one t, letting the method compute by itself a distance threshold for extracting close clusters (i.e., nodes which are within a maximum distance from a given random seed) and a distance length l defining the l-neighborhood of a node; in this work, several values of s and t are selected.

CSPA and MCLA were implemented in Python; the Late-Fusion, ANCA and EVA implementations are the ones of the original authors; the latter is also available in the CDLib Python library (Rossetti et al. 2019), together with the I-Louvain one. The code of SToC was kindly released by the corresponding authors upon our request.

X-Mark settings and evaluation

We report in Tab. 2 the X-Mark parameter values used for the graph generation. We leverage the widely adopted (Fortunato and Hric 2016) Normalized Mutual Information (henceforth, NMI) to compare X-Mark communities to the ones identified by the selected algorithms. NMI is formally defined as:

NMI(X, Y) = 2 [H(X) + H(Y) − H(X, Y)] / [H(X) + H(Y)],

where H(X) is the entropy of the random variable X associated with an algorithm partition, H(Y) the one related to the ground-truth partition, and H(X, Y) the joint entropy. NMI ranges in [0, 1], and it is maximized when the algorithm partition and the ground-truth one are identical.
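In practice, the score can be computed directly from the two label assignments; a sketch with toy labelings (scikit-learn, whose default arithmetic normalization corresponds to the form above):

from sklearn.metrics import normalized_mutual_info_score

ground_truth = [0, 0, 0, 1, 1, 2, 2, 2]   # planted community labels
detected     = [0, 0, 1, 1, 1, 2, 2, 2]   # partition found by an algorithm

print(normalized_mutual_info_score(ground_truth, detected))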
Evaluation: ensemble/selection

As previously introduced while analyzing the continuous attribute generation, the naïve number of communities subject to the generated community size sequence (i.e., the structural-component of the ground-truth partition) might not correspond to the naïve number of clusters subject to the attribute value distribution (i.e., the attribute-component one), in particular when the benchmark is instantiated to model well-connected communities that also produce well-separated clusters (i.e., imposing low μ and σ values). To test the ensemble algorithms on X-Mark, we define three different scenarios, identified as a, b, and c, subject to specific m_cont values, namely: (i) m_cont = [|C|, |C|], where |C| is the cardinality of the partition set; here we aim to generate as many peaks as the number of graph communities, in order to avoid any issue related to the differences between the structural- and the attribute-component, i.e., the fact that nodes that are similar w.r.t. their attributes do not actually correlate with the connections they establish; within this framework, two solutions are proposed to infer the number of clusters k required by the attribute-component: (a) k is the one chosen by the elbow method, which picks the elbow of the curve described by the WCSS values as the number of clusters to use; (b) k = |C|, i.e., the number of structural-component communities; (ii) m_cont = [2, 4], where (c) k is chosen according to the elbow method. The proposed analysis is designed to increasingly resemble real-world scenarios, since the gap between structural- and attribute-components increases from m_cont = [|C|, |C|] to m_cont = [2, 4], and an attribute-only clustering algorithm finds the cluster cardinality estimation increasingly difficult. In other words, the algorithm performances should decrease when the attribute-component needs to determine the number of clusters k by only looking at attribute information while, contextually, this number does not match the heavy-tailed topological constraints of the community size sequence. Thus, in the former scenario (i.e., m_cont = [|C|, |C|] with k chosen according to the WCSS elbow curve), such a gap is flattened, because the attribute domains equal the number of topological communities, i.e., we have a different peak for each graph community. Then, on the same benchmark instance, we test an alternative solution for the estimation of k (i.e., m_cont = [|C|, |C|] with k = |C|), to observe how the algorithms perform if we use only topological information to determine k. Finally, a more realistic scenario generates an attribute-aware planted partition where the attribute domains do not match the number of communities (i.e., m_cont = [2, 4]) and where an elbow method is used to determine k, because in real-world contexts we cannot have information about the real number of graph clusters. Figure 4 shows a selection of the obtained results. The letters above the plots (A, B, C) refer to the three scenarios previously introduced.
All the plots report the NMI between the X-Mark ground-truth partitions and the ones obtained by the algorithms, as functions of the μ and σ parameters. Above each ensemble/selection method (whose results are highlighted in green), we focus on the only-topological and only-attribute algorithmic approaches that each method uses to obtain a consensus partition from their fusion/selection, i.e., Louvain (Blondel et al. 2008) (values highlighted in red) and K-Means (MacQueen 1967) (in blue). Intuitively, Louvain is only affected by the tuning of the mixing parameter μ; conversely, K-Means is only affected by the value dispersion due to the increase of the standard deviation σ. When the attribute domains equal the number of topological communities (i.e., Fig. 4a), we also observe partition similarities when σ is relatively high, contrary to the other two scenarios. Most importantly, the similarity between the benchmark ground-truths and the K-Means clustering decreases when k is supposed to match the real number of communities (i.e., Fig. 4b) or in the most realistic real-world network simulation (i.e., Fig. 4c). Briefly, consensus and selection methods depend on both output types. Among the consensus methods, Late-Fusion seems to perform better than CSPA and MCLA, in particular because its linear-combination parameter, when set to 0.5, can tune a better trade-off between the two clustering typologies. The Selection method chooses between an only-topological and an only-attribute algorithm according to whether the graph structure is ambiguous. Below the μ_lim threshold, Louvain is maintained as the clustering choice; above it, K-Means is selected, but its performance depends on the attribute dispersion tuned by σ: if the structure is ambiguous and the attributes are clear, the Selection method performs well (and better than a consensus method, since it only uses K-Means and not a combination of clusterings); however, such an achievement is strongly affected by the involved scenario (a or c). Within the LCD context, these approaches work well if the two types of outputs correct each other. Again, observing the Louvain and K-Means NMI in Fig. 4a, we can see how both methods can recognize the true X-Mark synthetic communities, respectively, when a well-separated structure (low μ) and well-separated attributes (low σ) are generated; thus, switching from Louvain to K-Means gives the Selection method a similarity continuity (w.r.t. the true communities) from an ambiguous structure to clear attributes. In some sense, since communities from a network point of view do not exist, a classic clustering method is performed. However, the switch from an ambiguous structure to clear attributes gives worse results when more realistic scenarios are simulated (Fig. 4c), that is, when two well-separated and poorly interconnected dense communities sharing the same majority attribute values exist.

Evaluation: modifying quality functions

Contrary to ensemble/selection methods, algorithms that modify a topological quality function do not fuse the clusterings of two already performed only-topological and only-attribute methods; rather, they extend an only-topological approach by including the attributes in the maximization of a function aiming to find well-connected (and homogeneous) communities. Here, we focus on EVA and I-Louvain, which work, respectively, on categorical and continuous attributes. They do not need a pre-specified number of clusters. EVA needs to tune the parameter of the linear combination used to balance the topological and semantic importance when grouping nodes, i.e., the α parameter. I-Louvain does not need any parameter tuning, since its function is normalized to give the same importance to relational and attribute information.
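As a usage sketch, EVA can be run through CDLib; the exact signature may vary across library versions, so treat the call below as indicative rather than authoritative, and the graph and attribute mapping are toy placeholders:

import networkx as nx
from cdlib import algorithms

G = nx.karate_club_graph()
# toy node -> attribute mapping; eva is expected to take a dict of dicts
labels = {v: {"club": G.nodes[v]["club"]} for v in G.nodes()}

coms = algorithms.eva(G, labels, alpha=0.8)  # alpha: purity/modularity trade-off
print(len(coms.communities))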
EVA needs to tune the parameter of the linear combination used to balance topological and semantic importance when grouping nodes (its trade-off parameter). ILouvain does not need any parameter tuning, since its function is normalized to give the same importance to relational and attribute information. Figure 5 shows the NMI between the X-Mark ground-truth partitions and the ones obtained by EVA (Fig. 5a) and ILouvain (Fig. 5b), as functions of the mixing parameter and the attribute noise (EVA) or the attribute standard deviation (ILouvain). We test EVA only against benchmark instances generated with m_cat = [2, 4] (results with m_cat = [|C|, |C|], not shown, were similar). When the trade-off parameter is 0, only the topological component of the function (i.e., modularity) is optimized, which is equivalent to running Louvain; when it is 1, only the attribute component (i.e., purity) is optimized, which is equivalent to clustering the set of the biggest connected components whose nodes share the same label profile. In the figure, we show results for trade-off values in [0.5, 0.8, 0.9, 1]: we focus only on values biased towards the homogeneity optimization, to see to what extent the attributes influence the clustering. EVA matches the X-Mark communities, outperforming its natural baseline, Louvain: when the mixing parameter increases, EVA can exploit attribute information to find the homogeneous communities that emerge from the random configuration of links between communities. In other words, a flat surface means that an algorithm focuses only on the attribute information: regarding EVA, this is quite evident when the trade-off parameter equals 1. A good trade-off is one able to maintain a high NMI when the mixing parameter is low, and not to decrease to zero when the mixing parameter is high, provided the level of attribute noise is low. Conversely, ILouvain performs poorly on X-Mark. Similarly to the framework proposed for the ensemble/selection methods, we tested ILouvain against benchmarks generated using m_cont = [|C|, |C|] (Fig. 5b, above) and m_cont = [2, 4] (Fig. 5b, below). The obtained results underline that ILouvain is not able to exploit the attributes (i.e., NMI equal to 0 for a high mixing parameter). Even fusing the two components (attributes and modularity) does not allow it to recognize structurally well-defined clusters (i.e., a low NMI for a low mixing parameter); as a consequence, ILouvain performs worse than its baseline, Louvain, possibly because its objective function cannot tune the relative contribution of structure and attributes. Evaluation: Distance-based Finally, we focus on two distance-based methods, ANCA and SToC. Such methods can use both categorical and continuous attributes, which can even be exploited together. We focus only on the two attribute types taken individually, generating X-Mark networks with m_cat, m_cont = [2, 4] (results with m_cat, m_cont = [|C|, |C|], not shown, were similar). Regarding SToC, the user is allowed to tune the parameters s, which pushes towards attribute similarity, and t, which pushes towards topological similarity: we noticed that similar results are achieved when testing SToC against the categorical benchmark instance, thus we show only s = t = 0.5 (Fig. 6a, right, below), which is also one of the parameter settings proposed in the reference paper (Baroni et al. 2017); for the continuous attributes, instead, we also tested SToC both with s = 0.2, t = 0.8, performing a topological clustering, and with s = 0.8, t = 0.2, a more attribute-aware one. As we can observe from Fig. 6b, ANCA performs worse than the other approaches, particularly if compared with the ensemble/selection methods or EVA.
The trend of the ANCA 3D plots appears reasonable, but (i) the NMI decreases only as a function of the mixing parameter, suggesting that only the topological component is taken into account for the clustering task, and (ii) the maximal NMI values are lower than those of the ensemble/selection methods or EVA. Similarly, the trend of the SToC 3D plots is reasonable, but (i) it resembles a flat surface (particularly while clustering categorical attributes, Fig. 6a, below, right), suggesting that only the attribute component is taken into account for the clustering task (as we already saw for EVA when its trade-off parameter equals 1), and (ii), again, the maximal NMI values are lower than those of the other methods. SToC performs better when clustering continuous attributes and the discovery of communities is forced towards the topological component (Fig. 6a, above, left), but its performance decreases for the other parameter settings, suggesting that the algorithm is, in some sense, confounded by the attribute component of the graph. Discussion and conclusion In this work, we proposed a solution for evaluating labeled community discovery (LCD) algorithms. To this end, we modeled X-Mark, a synthetic tool for generating node-attributed networks with planted communities. Extending existing intuitions for the generation of only-topological benchmarks (e.g., LFR (Lancichinetti et al. 2008)), X-Mark first generates both the community size and degree distributions, then uses them to associate each node with a partition. Label homogeneity within communities is controlled by the probability of having, within each community, a user-defined percentage of similar nodes, encoded in a noise parameter for categorical attributes and in the community standard deviation for continuous ones. Once each node has been inserted into its preferred community, the edge rewiring automatically generates assortative patterns within communities, contributing to the homophilic network behavior. We thus guarantee community homogeneity and network homophily, resembling scenarios for simulating node-attributed real-world network representations. Several lines of discussion stem from X-Mark, among them: (i) how to exploit X-Mark's ability to specify different structure and attribute combinations (e.g., clear structure vs. clear attributes, or clear structure vs. noisy attributes), and, more generally, (ii) how to fairly compare the quality of clusterings when testing algorithms against synthetic benchmarks. Firstly, we designed our model to be as general as possible, leaving the analyst free to specify how to combine different structure and attribute configurations. Analyzing algorithm performance as a function of the whole range of structure and attribute parameter values allowed us to have a broad vision of how the algorithms perform. Nevertheless, as remarked in several discussions (Chunaev 2020; Chunaev et al. 2020), a strong rationale behind many LCD approaches is often assumed by researchers: the algorithms can exploit nodes' attributes in the CD task because homophily strongly contributes to community formation. In other words, since node similarities match the connections nodes establish, it is useful to consider such similarities while grouping closer nodes. Nevertheless, it is intuitive to think that some attributes might match the node connections, while others are independent of the relational realm of a dataset (see Peel et al. 2017; Newman and Clauset 2016). X-Mark can model situations where attributes do or do not align with the topology.
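To make the generation recipe just summarized concrete, here is a minimal sketch of an X-Mark-like attributed benchmark built from standard tools. It is our own illustration, not the authors' generator: the planted-partition structure, the parameter names, and the equal community sizes are simplifying assumptions (X-Mark itself draws heavy-tailed size and degree sequences).

```python
# Sketch of an X-Mark-like node-attributed benchmark with planted communities.
import random
import networkx as nx

def generate_attributed_benchmark(n_communities=4, community_size=50,
                                  p_in=0.3, p_out=0.02, sigma=1.0, noise=0.1):
    # Structural component: intra-/inter-community edge probabilities plant the partition.
    G = nx.planted_partition_graph(n_communities, community_size, p_in, p_out, seed=42)
    rng = random.Random(42)
    for node in G.nodes():
        community = node // community_size          # nodes are numbered block by block
        # Continuous attribute: one Gaussian peak per community; sigma controls overlap.
        G.nodes[node]["cont"] = rng.gauss(community, sigma)
        # Categorical attribute: the community label, flipped with probability `noise`.
        if rng.random() < noise:
            G.nodes[node]["cat"] = rng.randrange(n_communities)
        else:
            G.nodes[node]["cat"] = community
    return G

G = generate_attributed_benchmark()
print(G.number_of_nodes(), G.number_of_edges())
```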
In the future, we plan to extend our tests to LCD algorithms that explicitly exploit attribute information, by looking at the combination of clear/noisy structures and clear/noisy attributes. Moreover, we plan to test LCD algorithms against different attribute-aware benchmarks, to see if other external comparison methods can lead to different results. Since X-Mark is based on the same algorithmic schema as LFR, we also plan to extend it to cope with overlapping communities, as well as weighted and directed networks, as done for the classic LFR extension (Lancichinetti and Fortunato 2009). Dealing with such task variants and different representations is not trivial in the presence of node metadata. Since a benchmark aims to resemble real-world scenarios, we also need more investigations into real-world weighted or directed node-attributed networks. The current lack of a large corpus of studies in this direction makes it harder to find valuable solutions for these extensions. Attribute-aware CD, which identifies well-connected and label-homogeneous groups of nodes, is a rising theme in complex network analysis. We are far from reaching standard procedures for handling attribute information embedded in the nodes, as well as for evaluating different clustering outputs. We aimed to take some first steps towards a more careful evaluation of attribute-aware CD algorithms, as recently provided only in Vieira et al. (2020). Based on the present findings, thanks to X-Mark, we can evaluate algorithm performance within a controlled environment, i.e., adopting systematic parameter-tuning strategies. Among other findings, we observed that ensemble clustering methods can suffer from the selection of the best number of communities k, while algorithms modifying only-structure quality functions can outperform their only-structure baseline only when the new fitness function is well defined.
9,837.8
2021-10-15T00:00:00.000
[ "Computer Science", "Mathematics" ]
BMS Flux Algebra in Celestial Holography Starting from gravity in asymptotically flat spacetime, the BMS momentum fluxes are constructed. These are non-local expressions of the solution space living on the celestial Riemann surface. They transform in the coadjoint representation of the extended BMS group and correspond to Virasoro primaries under the action of bulk superrotations. The relation between the BMS momentum fluxes and celestial CFT operators is then established: the supermomentum flux is related to the supertranslation operator, and the super angular momentum flux is linked to the stress-energy tensor of the celestial CFT. The transformation under the action of asymptotic symmetries and the OPEs of the celestial CFT currents are deduced from the BMS flux algebra. Introduction Celestial holography aims at establishing a holographic description of quantum gravity in four-dimensional asymptotically flat spacetime in terms of a two-dimensional conformal field theory, called the celestial CFT (or CCFT for short), living on the boundary celestial Riemann surface. This program exploits the richness of the asymptotic symmetry structure of the spacetime [1][2][3][4][5][6] to constrain the potential candidate for the celestial dual theory. Conformal symmetries of the CCFT are induced by superrotations, which are part of the (extended) Bondi-Metzner-Sachs (BMS) asymptotic symmetries in the bulk theory [5,[7][8][9][10]. In celestial holography, each scattering particle in the bulk spacetime is associated with an operator that lives on the boundary celestial Riemann surface. In those terms, soft theorems in the bulk spacetime, corresponding to Ward identities of the gravitational S-matrix for the extended BMS symmetries, are implemented by 2d currents in the CCFT; see [11] for a review. In particular, the supertranslation current has Ward identities that are equivalent to the leading soft graviton theorem [6,12], while the sub-leading soft theorem is obtained by the insertion of a holographic stress-tensor [13]. A natural basis to describe massless asymptotic particles in celestial holography can be obtained by applying a Mellin transform with respect to the energy of the external particle, which maps energy eigenstates to boost eigenstates and hence makes the conformal properties more manifest [4,[14][15][16][17]. In [18], the coadjoint representation of the BMS group in four dimensions has been constructed. It acts on a set of conformal fields that have been identified with local expressions of the solution space of non-radiative asymptotically flat spacetimes at null infinity through a (pre-)momentum map. In the presence of radiation, the transformation of the gravitational solution space becomes more complicated and the coadjoint representation of the BMS group is not sufficient to describe it. Furthermore, the BMS surface charges become non-integrable [19] and one needs additional inputs to select a meaningful integrable part. The algebra requires the use of the modified Barnich-Troessaert bracket [20], leading to a field-dependent 2-cocycle (see also [21] for a detailed analysis from double-soft limits of amplitudes). As shown in [22,23], the latter can be re-absorbed in the definition of the modified bracket by using the Noetherian split between integrable and non-integrable parts. Instead of working with local expressions of the solution space at a finite value of the retarded time u, one could consider fluxes that correspond to expressions integrated over u.
This point of view is closer to the spirit of celestial holography where the retarded time does not appear explicitly in the CCFT. It was shown in [24,25] that, provided one chooses an appropriate integrable part in the BMS surface charges, the algebra of associated BMS fluxes closes under the standard bracket. The prescription that we consider here is based on [25,26] and has the following important properties: (i) the flux algebra closes under the standard bracket, (ii) the fluxes vanish when evaluated on vacuum solutions (namely solutions with identically vanishing Riemann tensor that are constructed in [27][28][29]), (iii) the fluxes vanish for non-radiative spacetime solutions, (iv) the fluxes are finite provided one chooses the appropriate falloffs in u. In this paper, we identify some non-local combinations of the solution space of fourdimensional asymptotically flat spacetimes that transform in the coadjoint representation of the extended BMS group in presence of radiation. We call these expressions the BMS momentum fluxes since they are involved in the BMS fluxes discussed in [25]. The inclusion of the superrotations in the analysis requires a meticulous treatment of the 2d Liouville stress-tensor discussed in [26,27]. In a second step, we propose a new prescription to split the fluxes into soft and hard parts, so that the associated soft and hard phase spaces factorize. We then relate the soft BMS momentum fluxes with the supertranslation operator and the stress-tensor of the CCFT. We provide the precise expressions of these CCFT currents in terms of the bulk metric and deduce their transformation laws under extended BMS transformations. Finally, from the BMS flux algebra, we deduce the OPEs of the BMS momentum fluxes and recover the OPEs of the CCFT operators. Asymptotically flat spacetimes In this Section, we describe the bulk side of the celestial holographic description by reviewing the analysis of four-dimensional asymptotically flat spacetimes at null infinity, denoted I + , in Bondi gauge [1][2][3]; see [30,31] for an intrinsic conformally invariant geometrical description of null infinity. We mainly follow the notations and conventions of [5,26]. In Bondi coordinates (u, r, x A ), x A = (z,z), the spacetime metric reads as where β, V , g AB , U A are functions of (u, r, x A ) and the transverse metric g AB satisfies the determinant condition We consider the asymptotically flat spacetimes satisfying the boundary conditions is unit sphere metric and C AB (u, x) is a 2-dimensional symmetric traceless tensor called the asymptotic shear. Let us make some comments on the choice of falloffs (2.3), (2.4): • We allow for possible puncture singular violations of the above boundary conditions to accommodate with the Witt ⊕ Witt superrotations symmetries [5,7,8] that we discuss below. In particular, we consider the topology of the 2-punctured sphere as celestial Riemann surface S ≃ I + /R [18]. • Possible relaxations of the above boundary conditions have been considered recently in the literature allowing for variations of the transverse boundary metric. These lead to enhancement of the asymptotic group with Diff(S 2 ) superrotations [26,[32][33][34] and/or Weyl rescaling symmetries [5,22,35,36] (see also [37] for a review). While the case that we discuss here is the most natural to study the celestial holography since it readily implies the conformal symmetries on the celestial Riemann surface, we will comment on these extensions in the discussion section. 
• In the above conditions, we set the term of order r^0 in the expansion of g_AB to zero. Turning on this term would bring log r terms into the expansion, which we want to avoid [5,38,39]. For discussions on polyhomogeneous spacetimes, see e.g. [40][41][42][43][44][45][46][47]. Solving Einstein's equations in vacuum with vanishing cosmological constant for the boundary conditions (2.3) yields the expansions (2.5) [5,38], which involve in particular the Bondi mass aspect and the angular momentum aspect. The 2-sphere indices in (2.5) are lowered and raised with q̄_AB and its inverse, and D_A is the Levi-Civita connection on the celestial Riemann surface associated with q̄_AB. The Bondi mass and angular momentum aspects satisfy the time evolution equations (2.6), with N_AB = ∂_u C_AB the Bondi news tensor. The residual diffeomorphisms that preserve the Bondi gauge (2.1) and the falloff conditions (2.3) are generated by vector fields ξ = ξ^u ∂_u + ξ^z ∂_z + ξ^z̄ ∂_z̄ + ξ^r ∂_r whose components are given in (2.7), where T = T(z, z̄) is the supertranslation parameter and Y = Y(z), Ȳ = Ȳ(z̄) are the superrotation parameters satisfying the conformal Killing equation. Using a modified bracket whose last two terms take into account the field-dependence of the asymptotic Killing vectors (2.7) at subleading order in r [5,48], the asymptotic Killing vectors (2.7) satisfy the commutation relations (2.9) and (2.10), where c.c. stands for complex conjugate terms. This corresponds to the extended BMS algebra, namely (Witt ⊕ Witt) + s*, where s* stands for the (possibly singular) supertranslations. For convenience, we introduce the notations f = (Ω_S Ω̄_S)^(−1/2) T + (u/2)(D_z Y + D_z̄ Ȳ) and Y^A = (Y, Ȳ). Under residual gauge diffeomorphisms (2.7), the solution space transforms infinitesimally as in (2.12). As one can see from the second expression there, the Bondi news N_AB transforms inhomogeneously under superrotations. As discussed in [26], one can define the physical news as in (2.13), where TF stands for the trace-free part. N^vac_AB is the trace-free part of the stress-tensor of a 2d Euclidean Liouville theory living on the celestial Riemann surface. The Liouville scalar field Φ is called the "superboost field" and it encodes the refraction/velocity kick memory effects [26]. It satisfies the equation of motion (2.16), with R = 2 Ω_S Ω̄_S ∂ ∂̄ ln(Ω_S Ω̄_S) = 2, and transforms as in (2.17). As a consequence of (2.16) and (2.17), the Liouville stress-tensor (2.14) satisfies D^A N^vac_AB = 0 and transforms as in (2.18). One can show that it is related to the trace-free part of the Geroch tensor ρ_AB [24,[49][50][51]. The interest of the physical news (2.13) is that it transforms homogeneously, so that its vanishing is a meaningful condition to impose in the presence of superrotations to define non-radiative spacetimes. In addition to the boundary conditions (2.3), one also imposes the falloff conditions (2.20) when u → ±∞, which are compatible with the action of superrotations [24][25][26][52], where C_± correspond to the values of the supertranslation field at I^+_± that encode the displacement memory effect [53]; we also have the corresponding transformation law. As discussed in [24,52], the falloffs (2.20) are stronger than those considered in e.g. [26], but we found that they are necessary for the finiteness of the flux related to superrotations that we will introduce in Section 4. The falloffs (2.20) imply that, at the corners I^+_±, the spacetime is non-radiative (N_AB|_{I^+_±} = 0) and the physical asymptotic shear defined by Ĉ_AB = C_AB − u N^vac_AB is purely electric, i.e. satisfies the electricity condition (2.22). One can check that this condition is preserved under BMS transformations.
It generalizes the standard electricity condition considered e.g. in [6] to the presence of superrotations [24][25][26]. Conformal fields on the celestial Riemann surface We now set the stage for the boundary side of the celestial holography framework. From the previous section, we infer that the celestial Riemann surface S ≃ I^+/R can be taken as the 2-punctured Riemann sphere endowed with the fixed Euclidean metric (2.4) [18]. It is convenient to complexify S and treat the coordinates z and z̄ independently. From the boundary point of view, one can consider the "extended conformal transformations" that preserve the conformal class of the metric (2.4). They are defined as the combined action of conformal coordinate transformations z′ = z′(z), z̄′ = z̄′(z̄) and Weyl rescalings, which induce the transformations (3.1) on the conformal factor, where E_R(z, z̄) is the real Weyl rescaling parameter. These transformations preserve the particular representative (2.4) of the conformal class provided the constraint (3.2) holds. Infinitesimally, the extended conformal transformations (3.1) satisfying (3.2) are generated by the conformal Killing vectors Y(z)∂ + Ȳ(z̄)∂̄ induced on S from the bulk superrotations defined in (2.7). A conformal field of weights (h, h̄) is defined as a field φ_{h,h̄}(x) on S which transforms as in (3.3) under the transformations (3.1) with the constraint (3.2). One can then define a spin weight J and a boost weight/conformal dimension ∆ as usual: ∆ = h + h̄ and J = h − h̄. As explained in [18,35], there is a one-to-one map between conformal fields and weighted scalars with spin weight s = J and boost weight w = −∆. The weighted-scalar point of view is the one that naturally arises when starting from the solution space of gravity [54][55][56]. However, it can easily be related to the conformal-field point of view by using the conformal factor of the metric. In the present paper, we choose to work in the latter framework, which is more adapted to celestial holography. It will turn out to be useful to introduce the derivative operators D and D̄ defined in (3.5), which act on (h, h̄) conformal fields to give conformal fields of weights (h + 1, h̄) and (h, h̄ + 1), respectively. They satisfy D(Ω_S Ω̄_S) = 0 = D̄(Ω_S Ω̄_S). These operators coincide with the Weyl covariant derivative operators introduced in [18] for the framework that we are considering here, i.e. with the fixed representative (2.4). In particular, the Liouville field introduced in (2.14) naturally arises as part of the Weyl connection. Furthermore, the operators (3.5) also correspond to the Witt ⊕ Witt version of the Diff(S²)-covariant derivative introduced in [24] (see also [52]). We assume that the conformal fields can be expanded in formal series as in (3.7), where the coefficients a_{k,l} ∈ C satisfy appropriate conditions [57,58]. If h, h̄ are integers (resp. half-integers), then k, l are taken to be integers (resp. half-integers). The residues of φ_{h,h̄}(z, z̄) with respect to z and z̄ are defined in the usual way, and from the residue theorem we have the fundamental relations, where C is a contour around the puncture. Notice that total derivative terms can be discarded in the contour integrals, since there are no log z terms in the expansion (3.7). We use a notation with a 1/(2iπ) normalization to designate the integral over the celestial Riemann surface S. The expansion (3.7) is inverted by the relation (3.10). (The standard normalization for the measure on the celestial sphere is recovered when using the stereographic coordinates z = cot(θ/2) e^(−iφ), z̄ = cot(θ/2) e^(iφ).)
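The finite transformation law referred to as (3.3) did not survive the text extraction; as a reference point, the standard 2d form such a law usually takes is sketched below (our own rendering of the usual convention, which may differ in detail from the paper's precise definition, since the extended transformations also involve the conformal factor):

```latex
% Standard transformation of a conformal field of weights (h, \bar h)
% under z' = z'(z), \bar z' = \bar z'(\bar z); a sketch of the usual convention.
\phi'_{h,\bar h}\!\left(z',\bar z'\right)
  = \left(\frac{\partial z'}{\partial z}\right)^{-h}
    \left(\frac{\partial \bar z'}{\partial \bar z}\right)^{-\bar h}
    \phi_{h,\bar h}(z,\bar z),
\qquad
\Delta = h + \bar h, \qquad J = h - \bar h .
```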
The formal Dirac delta-functions are defined as the following formal distributions [57,58]: , together with the corresponding relations for δ(z −w). We write the delta-function on the celestial Riemann surface as Generators and momenta We now identify the parameters of the extended BMS algebra and some non-local combinations of the solution space introduced in Section 2 from bulk considerations, as conformal fields on the celestial Riemann surface in the sense of Section 3. The conformal weights of the various fields discussed in this paper are summarized in Table 1. Let us start with the bms 4 symmetry parameter given in Equation (2.7). 5 The supertranslation parameters T (z,z) can be seen as real conformal fields of weights (− 1 2 , − 1 2 ) [18]. 6 They can be expanded in formal series as in (3 .7): with k, l half-integers andt k,l = t l,k so that T is real. Notice that the four Poincaré translations are spanned by T1 Superrotations are parametrized by the complex conformal fields Y(z),Ȳ(z) of weights (−1, 0) and (0, −1), respectively [18]. The infinitesimal actions of superrotations on a conformal field φ h,h (z,z) are given by These are the infinitesimal analogues of (3.3). Superrotation vector fields can be expanded as in (3.7): 3) 5 In this paper, since we are focusing on the conformal field point of view, we do not write the "˜" notation above the conformal fields, which contrasts with the convention used in [18]. 6 The choice of conformal weights for the symmetry parameters will be justified later through the pairing between generators and momenta in (5.1). where m ∈ Z and y m ,ȳ m are complex numbers. The six global Lorentz parameters are spanned by Y −1 , Y 0 , Y 1 , and their complex conjugates. In terms of (4.3), equation (4.2) can be rewritten as or, equivalently, For convenience, the bms 4 commutation relations (2.9) and (2.10) can be rewritten using the In terms of expansions (4.1) and (4.3), these commutation relations become [5,59] [ (4.7) Let us now define the BMS momentum fluxes as particular non-local combinations of the solution space data of asymptotically flat spacetimes appearing in (2.5) and interpret them as conformal fields on the celestial Riemann surface S. First, the supermomentum flux P(z,z) is defined by which corresponds to the difference of values of the supermomentum M(u, z,z) between the two non-radiative asymptotic regions I + ± . Here we used the prescription of [25,26] to define the supermomentum in terms of the solution space. The supermomentum flux P(z,z) can be seen as a (J = 0) conformal field of weights ( 3 2 , 3 2 ). Indeed, under infinitesimal BMS transformations, one can deduce from (2.12) and (2.20) that which is the expected infinitesimal transformation law for a ( 3 2 , 3 2 ) conformal field (see (4.2)). For later purposes, it is useful to split the supermomentum flux into soft and hard parts involving linear, respectively quadratic, terms inN AB inside the integral. We prescribe P = P sof t + P hard (4.10) with (4.11) Notice that the fluxes of supermomenta defined as in (4.11) are finite in u and vanish in stationary configurations whereN AB = 0, which is a desirable physical requirement [19,26]. One can show that the soft and the hard parts transform separately as in (4.9), namely They can therefore be both interpreted as conformal fields of weights ( 3 2 , 3 2 ), which justifies the specific split between hard and soft parts in (4.11). 
In terms of the superrotation-covariant derivative operators introduced in (3.5), the soft part can be elegantly rewritten as where the leading soft mode of the news tensor N (0) (z,z) is a ( 3 2 , − 1 2 ) conformal field. Moreover, using the electricity condition (2.22) encoded in the falloffs (2.20), we have D 2N (0) =D 2 N (0) and with ∆C = C + − C − the difference of the supertranslation field between the future and past corners of I + . Second, the super angular momentum flux is parametrized by the equivalence classes [J ] and [J ] for the equivalence relation [18] J ≡ J + DL,J ≡J +DL. (4.15) These equivalence classes can be seen as complex conformal fields of weights (1, 2) and (2, 1), respectively. In terms of the gravitational data, we define which corresponds to the difference of values between I + ± of the super angular momentum N (u, z,z) defined by (4.17) We have the analogous complex conjugate relations forJ (z,z) andN (u, z,z). Here we used a prescription based on [25,26,60] to define the super angular momentum. 7 Under infinitesimal BMS transformations acting through (2.12), the super angular momentum flux transforms as together with the complex conjugate relation forJ . For future purposes, it is useful to split the super angular momentum flux into soft and hard parts; we propose with and the complex conjugate relations forJ sof t andJ hard . Notice that the fluxes of super angular momenta (4.20) are finite thanks to the stronger u-falloffs that were taken (2.20) and vanish in stationary configurations whereN AB = 0 [19,26]. One can show that the soft and the hard parts transform separately as in (4.18), namely which implies that J sof t and J hard (J sof t andJ hard ) can be seen separately as conformal fields of weights (1, 2) (respectively (2, 1)). The transformations (4.21) justify the specific choice of split prescribed between soft and hard parts in (4.20). In particular, the terms involving the supertranslation field C − at I + − ensure that the expressions transform as they should under supertranslations. 8 In terms of the derivative operators (3.5), the soft part can be rewritten as 7 The prescription (4.17) for the super angular momentum differs from the one proposed in [25] by magnetic contributions of the shear that do not play any role at I + ± , but that allow to have vanishing fluxes for stationary solutionsN AB = 0 (see [61] for a detailed discussion). 8 One could have replaced C − by C + in the expressions (4.20) without affecting the result (4.21). where the subleading soft mode of the news tensor N (1) (z,z) is a (1, −1) conformal field and C (z,z) is a (− 1 2 , − 1 2 ) conformal field. As already mentioned, the prescription for the BMS momenta (4.8) and (4.17) that we are using here have all the desired properties, including finiteness in u, vanishing for non-radiative solutions, vanishing for vacuum configurations and closure under the standard bracket when considering the integrated fluxes over I + (see Section 5). Concerning the splits between soft and hard parts in (4.11) and (4.20), they differ from those originally proposed in [6,9,12] (see also [11]) by terms involving the memory fields ϕ(z),φ(z) and C − that label the vacuum degeneracy [27][28][29]. When setting ϕ = 0 =φ (N vac AB = 0) and C − = 0, we consistently recover the standard expressions. The additional terms that we have allow us to obtain better transformation laws (4.12) and (4.21) under the action of BMS symmetries. 
As we will see in Section 6, this will be of major importance to identify the CCFT operators in the solution space of gravity that obey the desired constraints and transformation properties. BMS flux algebra When using covariant phase space methods [62][63][64][65], BMS surface charges are non-integrable due to the presence of radiation [19,20]. Defining meaningful finite charges requires to impose additional criteria to isolate a specific integrable part (see e.g. [22,23,25,26,[66][67][68] for recent proposals of such criteria and [69,70] for the implication of the various expressions on observational data). Here, we follow the prescription of [25,60,61] (see also footnote 7) to select the integrable part and define the "finite" charges. The fluxes are then obtained by expressing the finite surface charge integrals as volume integrals over I + . In terms of the generators and flux of momenta that were introduced in Section 4, the BMS fluxes read as 9 Table 1. The expression (5.1) can be seen as a pairing ·, · between the BMS generators of the algebra and the momentum fluxes: where bms * 4 denotes the dual of bms 4 . Indeed, it is linear in both entries and non-degeneracy comes from the fact that we have considered equivalence classes [J ] and [J ] of super angular momentum fluxes (4.15). Using this pairing, the transformations laws (4.9) and (4.18) can be interpreted as the coadjoint representation of bms 4 [18]. In particular, it has been shown in that reference that there exists a (pre-)momentum map between the solution space space of non-radiative asymptotically flat spacetimes and the dual of the global BMS algebra so(3, 1) + s. Here, we have extended these results for radiative spacetimes and for extended BMS algebra (Witt ⊕ Witt) + s * by considering the fluxes on I + , which are u-integrated expressions in terms of the solution space, rather than surface charges. These results rely crucially on the falloff conditions (2.20) at the corners of I + and the fact that the BMS fluxes are determined by the values of the surface charges at the corners. Now, using the basis dual to the one used for the expansion of the generators in (4.1) and (4.3) [18], it is instructive to expand the BMS momentum fluxes as in (3 .7): withp k,l = p l,k so that P is real. In terms of the above expansions, the infinitesimal variations (4.9) and (4.18) are encoded in the coadjoint representation of bms 4 , written ad * , as follows [18]: Similarly, the soft/hard BMS fluxes play the role of pairing for the soft/hard sectors. As discussed in Section 4, since the soft and hard parts of the momentum fluxes transform separately as (4.9) and (4.18) (see (4.12) and (4.21)), they also transform in the coadjoint representation of bms 4 for the appropriate pairing (5.5). We define the bracket between the BMS fluxes as As shown in [25], the BMS fluxes (5.1) satisfy the algebra which implies that the bracket (5.6) corresponds to the Kirillov-Kostant Poisson bracket on bms * 4 . In terms of the momentum fluxes, the bracket (5.6) can be written explicitly as {P(z,z), P(w,w)} = 0, (5.8) together with the complex conjugate relations. In particular, these relations reproduce the desired variations (4.9) and (4.18). Similarly, the flux algebra (5.7) can be written in terms of the momentum fluxes as {P(z,z), P(w,w)} = 0, (5.9) together with the complex conjugate relations. The bracket that has been considered up to this point is associated with the total BMS flux (5.1). 
However, as discussed around equation (5.5) above, one could study the soft/hard sectors separately and consider the appropriate induced bracket on each of them. More explicitly, assuming that the soft and hard sectors factorize [52], we have where the second relation is straightforwardly obtained by using the first of (5.10), the definition of the bracket (5.6) and the results (4.12) and (4.21). Henceforth, (5.8) and (5.9) can be specified to soft/hard sectors separately. In the following, since we want to relate the BMS flux algebra with the CCFT currents, we will focus on the soft sector. Momentum fluxes and CCFT operators One of the starting points of celestial holography was the remarkable observation that Weinberg's leading soft graviton theorem could be reformulated as the Ward identity arising from the insertion of a ( 3 2 , 1 2 ) Kac-Moody current P (z,z), called the supertranslation operator [6,12]. Similarly, it was later shown that the subleading soft graviton theorem [71] could be rewritten as an insertion of a (2, 0) operator T (z), identified as the stress-tensor of the celestial CFT, reproducing the Ward identity of a 2d CFT [13]. Although the supertranslation operator P (z,z) and the stress-tensor T (z) play a fundamental role in celestial holography, their precise relation to the bulk solution space in presence of superrotations (with the inclusion of N vac AB for consistency of the phase space) and their transformation properties under the extended BMS symmetries have not yet been explicitly worked out. In this Section, we explore these aspects and relate the BMS momentum fluxes introduced above with these CCFT operators. The supertranslation operator P (z,z) and its complex conjugateP (z,z) of weights ( 3 2 , 1 2 ) and ( 1 2 , 3 2 ), respectively, can be related to the soft supermomentum flux P sof t (z,z) as P sof t (z,z) =DP (z,z) + DP (z,z), (6.1) where the derivative operators D andD are defined in (3.5). From (4.11) and (6.1), one deduces the explicit expression of P (z,z) andP (z,z) in terms of the bulk metric: (Ω SΩS ) together with the complex conjugate expression forP (z,z). To obtain the last equality, we used the falloffs (2.20). Comparing with (4.14), one can rewrite (6.1) as P sof t (z,z) = 2DP (z,z) = 2DP (z,z), which is a direct consequence of the electricity condition (2.22). Notice that the expression of the supertranslation operator (6.2) that we are using is compatible with the one initially proposed in [6,12] when setting N vac AB = 0. The additional terms that we have allow us to have nicer transformation laws under the action of BMS symmetries. In particular, the supertranslation operator is an actual Virasoro primary rather then a descendent thanks to the use of the derivative operators (3.5). Indeed, an explicit computation gives or, equivalently, {J sof t (z,z), P (w,w)} = δ 2 (z − w)∂ w P (w,w) + 3 2 ∂ w δ 2 (z − w)P (w,w), {J sof t (z,z), P (w,w)} = δ 2 (z − w)∂wP (w,w) + 1 2 ∂wδ 2 (z − w)P (w,w). (6.5) We have the analogous results forP . The stress-tensor of the CCFT is encoded in the complex conformal fields T (z) andT (z) of weights (2, 0) and (0, 2), respectively. We observe that it can be constructed from the super angular momentum flux introduced in Section 4 as Since total derivatives can be dropped out, the definition (6.6) does not depend on the particular representatives of J sof t (z,z) andJ sof t (z,z) in the equivalent classes (4.15). 
The stress-tensor is related to the soft part of the flux for superrotations in (5.5) through and the complex conjugate relation for F sof t Y . One can recognize the last expression in (6.7) as the soft part of the superrotation charge [9,13]. From (4.20) and (6.6), one deduces the explicit expression of T in terms of the bulk metric: We have the complex conjugate expression for the anti-holomorphicT (z). The relation (6.8) agrees with the one first proposed in [13] when setting N vac AB = 0 = C − . The decoration with the terms involving the memory fields is turned on once we are considering a vacuum that is not global Minkowski space [27][28][29] and allows us to have nicer transformation laws. An explicit computation shows that or, equivalently, {J sof t (z,z), T (w)} = 0. (6.10) We again have the analogous results forT . Constraints on CCFT Up to this stage, all the results have been obtained from gravitational bulk computations. Some non-local combinations of the solution space have been identified as conformal fields on the celestial Riemann surface whose transformation laws are induced by bulk diffeomorphisms. In this construction, a phase space structure has emerged naturally from the BMS flux algebra. We now study the implications of these results at the quantum level and derive the OPEs between the various conformal operators. Using standard arguments [58], one can deduce the singular parts of the OPEs between the operators associated with BMS momentum fluxes by starting from their commutation relations (5.8). We have explicitlȳ ∂ w P(w,w), P(z,z)P(w,w) ∼ 0, together with the complex conjugate relations. The notation "∼" means equality modulo expressions that are regular as (z,z) → (w,w). The third OPE in (7.1) can be deduced from the fourth one using P(z,z)J (w,w) =J (w,w)P(z,z). We will avoid writing redundant OPEs in the following. As a consequence of the factorization between hard and soft sectors in the phase space discussed at the end of Section 5, the above OPEs can be written for hard and soft BMS momentum fluxes separately. From now on, since we want to find constraints on the CCFT from celestial currents, we will restrict ourselves to the soft sector. The OPEs between soft BMS momentum fluxes and the CCFT operators can be readily deduced from (6.5) and (6.10), leading to: ∂wP (w,w), Finally, using the commutation relations (6.11), one obtains the OPEs between CCFT operators: P (z,z)P (w,w) ∼ 0, P (z,z)P (w,w) ∼ 0, These results are compatible with those given in [59]. They also match with the OPEs found in [10] that were derived from collinear and conformally soft limits of amplitudes, up to the fact that the supertranslation operator that we are considering here is a Virasoro primary rather than a descendant. Let us now elaborate more on the constraints involving the celestial CFT operators and momentum flux operators with generic conformal operators. To simplify the discussion, we set the memory fields to zero, i.e. ϕ = 0 =φ (N vac AB = 0) and C − = 0. In celestial holography, a massless particle of energy ω involved in a scattering process in 4d flat space is associated to an operator O(ω, z,z) (which can depend on other quantum numbers, which are omitted in this notation), where (z,z) labels the point on the celestial sphere where the particle exits (or enters) spacetime [11,72]. 
Instead of working in the usual momentum basis, a promising celestial dictionary involves the Mellin representation [14][15][16], O_{h,h̄}(z, z̄) = ∫_0^∞ dω ω^(∆−1) O(ω, z, z̄) (7.4), which trades the energy ω for the conformal dimension ∆ = h + h̄. Celestial operators (7.4) indeed enjoy the property of transforming as 2d quasi-primaries. The so-called conformally soft limits [73], for which the conformal dimension takes specific values, lead to 2d currents in the CCFT; see e.g. [73][74][75][76][77][78][79][80][81][82][83]. It was shown that the OPEs involving the components of the CCFT stress-tensor T(z), T̄(z̄) and celestial operators O_{h,h̄} representing gauge bosons and gravitons take the standard form (7.5), which implies that these operators are Virasoro primaries. These expressions were derived from collinear and conformally soft limits of Einstein-Yang-Mills amplitudes in [10,84]. We deduce from (7.5) that the OPEs involving the super angular momentum flux take an analogous form. While superrotations lead to the expected expressions (7.5) in a CFT, it is a notorious fact in celestial holography that supertranslation symmetry is more subtle to deal with. In particular, it has been shown that the insertion of the supertranslation operator P(z, z̄) into a celestial correlation function gives the relation (7.7) [10,73]. This OPE relationship is nothing but the celestial consequence of Weinberg's leading soft graviton theorem, as it can be readily obtained from a Mellin transform of the Ward identity associated with supertranslation symmetry [6,12]. As one can see from (7.7), the action of supertranslations (even global ones) leads to a shift of (1/2, 1/2) in the conformal weights of celestial operators. One can then deduce the generic form of the OPE between the supermomentum flux operator P(z, z̄) and a celestial operator O_{h,h̄}(w, w̄), whose right-hand side involves the shifted operator O_{h+1/2, h̄+1/2}(w, w̄) (7.8). This formula agrees with the expression found in [10], which was obtained by taking successive commutators involving the zero-mode of P and the stress-tensor. Discussion We now conclude by discussing some possible extensions and consequences of the results presented in this work. Surface charges versus fluxes We have argued that the fluxes are more natural objects from the point of view of the CCFT than the surface charges at a fixed value of the retarded time u. In particular, the fluxes are completely determined by the values of the surface charges at the corners I^+_± of null infinity, which are non-radiative regions of the spacetime. The statement of the closure of the flux algebra (5.7) can then be recast as the closure of the surface charge algebra at I^+_±, without the need for a modified bracket or the appearance of a 2-cocycle. This echoes recent works suggesting that symmetries are encoded at the corners of hypersurfaces [68,[86][87][88][89][90][91]. The price to pay for considering fluxes at I^+ instead of surface charges is that we lose information on the local flux-balance laws, such as the Bondi mass loss formula. Hence, the two points of view are complementary: the integrated fluxes describe the state of a system, while the surface charges provide information on its dynamics. Central charge in the CCFT In the AdS_3/CFT_2 correspondence, the central charge of the boundary CFT_2 can be read off from the Brown-Henneaux central extension [92]. The latter appears classically by computing the charge algebra of large diffeomorphisms in asymptotically AdS_3 spacetimes.
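The (1/2, 1/2) weight shift mentioned above follows directly from the Mellin representation (7.4); as a one-line sketch (our own illustration, using only the schematic fact that the leading soft factor is linear in each hard particle's energy ω, with the spin J = h − h̄ held fixed):

```latex
% Sketch: an extra power of \omega under the Mellin integral shifts
% \Delta \to \Delta + 1, i.e. (h, \bar h) \to (h + 1/2, \bar h + 1/2) at fixed spin.
\int_0^\infty d\omega\, \omega^{\Delta-1}\,\big[\omega\, O(\omega, z, \bar z)\big]
  = \int_0^\infty d\omega\, \omega^{(\Delta+1)-1}\, O(\omega, z, \bar z)
  = O_{h+\frac{1}{2},\, \bar h+\frac{1}{2}}(z,\bar z).
```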
One might expect that a similar feature would hold in the present context, namely that the possible CCFT_2 central extension appears in the classical bulk computation of the charge algebra. However, as stated in (5.7), the BMS flux algebra closes under the standard Peierls bracket and does not exhibit a central term. This indicates that at least the garden-variety type of central charge of the CCFT_2 vanishes, which is in agreement with the results found in [10] from computing the T T OPE (see also (7.3)). Let us notice that there is still an imprint of a central extension in the transformation of the Liouville stress-tensor, which exhibits a Schwarzian derivative (see equation (2.18)). From extended to generalized BMS In this paper, we have considered gravity in asymptotically flat spacetimes with a fixed boundary structure. Allowing for some singular punctures on the celestial Riemann surface makes it possible to include the whole Witt ⊕ Witt superrotations in the asymptotic symmetry algebra, leading to the extended BMS algebra [5,7,8]. If fluctuations of the transverse boundary metric q_AB dx^A dx^B are permitted on the phase space (but keeping a fixed determinant √q = √q̄), one gets instead the generalized BMS algebra Diff(S²) + s, where the superrotations Diff(S²) are smooth diffeomorphisms of the celestial sphere [26,[32][33][34]. Note that these smooth superrotations can be extended to all diffeomorphisms with isolated singularities, written Diff(S²)*, if the celestial sphere admits some punctures. The latter singular extension is relevant when considering Ward identities of the S-matrix and the relation with subleading soft graviton theorems [9,32,33,77]. In particular, we have Witt ⊕ Witt ⊂ Diff(S²)*, which implies that a notion of conformal field can still be defined: a conformal field φ_{h,h̄}(z, z̄) transforms as before, but with superrotation generators Y = Y(z, z̄) that are now (possibly singular) functions of (z, z̄) on the celestial Riemann surface. Consequently, since there is no constraint on Y, one does not need to quotient the super angular momentum fluxes J(z, z̄) and J̄(z, z̄) by the equivalence relation (4.15). In this generalized BMS case, the precise expressions of P, J and J̄ in terms of the solution space and their split into hard/soft sectors can be deduced from the analysis displayed in e.g. [24][25][26]. They should be such that (4.9), (4.12), (4.18) and (4.21) still hold, so that the results of Section 5 concerning the BMS flux algebra remain valid. As argued in [77], a natural object in this context is the shadow transform of the stress-tensor, which leads to an operator T of weights (−1, 1), given in (8.2), together with its OPE. Using the observations relating the shadow stress-tensor to the soft part of the superrotation charge made in [17,77], (8.2) can be identified (up to a factor) with F^{gen,soft}_Y for Y = (z − w)²(z̄ − w̄). It would be interesting to explore the shadow supermomenta and super angular momenta, which are operators of weights (−1/2, −1/2) and (0, −1). More generally, it would be enlightening to have full control of shadow transformations from the bulk gravitational phase-space point of view. From generalized to Weyl BMS A new enhancement of the generalized BMS symmetries has been found recently by allowing the determinant √q of the boundary metric to fluctuate on the phase space [22]. This asymptotic symmetry algebra contains the Weyl rescaling symmetries discussed in [5,35,36] and has the semi-direct structure [Diff(S²) + Weyl] + s (8.4).
It was shown that the charges and fluxes associated with the Weyl rescaling symmetries are non-vanishing, which suggests the presence of an additional term in the flux (8.1). It would be interesting to repeat the analysis of the present paper in this case and to use the framework developed in [18,90] to treat conformal coordinate transformations and Weyl rescalings separately.
9,045.4
2021-08-26T00:00:00.000
[ "Physics" ]
Evaluating the performance of vocational students' competency certification through the Professional Certification Institute (P1) in the Computer and Network Engineering expertise program in South Sumatra ABSTRACT Introduction The implementation of skills testing in vocational high schools (SMK) is an important part of the national examination for students. In addition, Permendiknas Number 28 of 2009 explains that the results of skills testing are taken as an indicator of graduate standards; for the related stakeholders, the information on competencies describes what the workforce possesses and has mastered. The implementation of skills testing requires that the appointed test site meets the requirements and standards for the equipment and materials needed to support the competency test. One essential part of the competency test is therefore the verification of the test site. In practice, many competency tests are found to be conducted not in accordance with the guidelines, as expected: this can be seen in the insufficient facilities for the execution of the competency test, and the allocated time is also still inadequate. Moreover, the implementation of the competency test has not reflected the standard capabilities required in the workplace, because students' competencies have not been tested through a professional certification institute; yet skills testing and the recognition of the competencies possessed by graduating students are vital for schools, so research on vocational competency testing through a professional certification institute is very necessary. A competency test is considered successfully implemented in a school when all of its aspects obtain a high quality value. These aspects cover context, input, process, and product. Context covers policy, objectives, and the demands for capacity building of Computer and Network Engineering graduates with respect to the progress of the business/industrial world and of science and technology. Input includes the support of human resources (assessors), infrastructure, assessment devices, and the competency test itself. Process covers the allocation of time, work procedures, and the observation of skills. Product covers the documentation of competency scores and the certificate of competency.
Based on preliminary observations carried out by the researchers at SMKN 1 Indralaya Selatan, Ogan Ilir, through interviews with assessors of the Computer and Network Engineering program in June 2020, several problems still occur in the implementation of vocational certification, among them: 1) competition in the Indonesian labor market with foreign workers entering Indonesia after the ASEAN Economic Community (MEA); 2) many SMK graduates do not obtain jobs in accordance with their areas of expertise, because SMK graduates are not equipped with the competencies required by the business/industrial world; 3) many vocational schools are not yet ready to implement [1] on the revitalization of SMK; 4) vocational schools have not implemented graduate certification, so only a limited number of vocational high school graduates hold certificates of competency from the national professional certification board (BNSP); 5) few vocational high schools have partnerships with the business/industrial world in an effort to market their graduates; 6) the competency certification of vocational students of the Computer and Network Engineering expertise program conducted by the Professional Certification Institute (P1) in South Sumatra province has never been evaluated. These issues indicate that vocational competency certification still has many shortcomings. To improve vocational competency certification in the SMKs located in South Sumatra, an evaluation of the implementation of these activities is badly needed, in order to understand the extent of their success and to identify deficiencies, so that vocational competency certification can be improved in the future. By evaluating vocational competency certification, it is expected to become known how the vocational certification process is carried out, so that it can produce school graduates who are truly ready to work and competitive in seeking employment opportunities, either through self-entrepreneurship or by joining firms as workers with standard competencies. The evaluation is carried out to determine the level of conformity between the activities conducted by the vocational school and the certification guidebook for students. Evaluation is a systematic process of portraying, obtaining, reporting, and applying descriptive information, and of assessing information about the merit, feasibility, honesty, and safety of some object [2]. In this way, a program evaluation can move from shortcomings towards a solution, so that improvements can be made in the future. Hence, the researchers will review in more depth the "evaluation of vocational student certification through the Professional Certification Institute (P1) for the Computer and Network Engineering expertise program in South Sumatra using the CIPP model". Method This research uses a qualitative approach, conducted as a research procedure that yields descriptive data in the form of written or spoken words of other people and of observed behavior. Thus, in a qualitative study, although the data obtained can be counted and presented in the form of figures, the analysis remains qualitative.
The subjects of this research are the school principal, the head of the certification unit, the competency assessors, and grade XII students of the Computer and Network Engineering expertise program. The number of grade XII Computer and Network Engineering students was 106, namely 25 students from SMKN 1 Penukal, Penukal Abab Lematang Ilir, and 81 students from SMKN 1 Indralaya Selatan, Ogan Ilir. The data collection techniques used in this research were questionnaires, interviews, and documentation. Results and Discussions Based on the research findings, it is known that the certification activities are handled by committee members and the professional certification institute; each committee member has their own tasks and functions. These tasks and functions are written in the basic vocational certification documents of SMKN 1 Penukal and SMKN 1 Indralaya Selatan for the 2020/2021 academic year, and the professional certification institute has also acted in accordance with the national professional certification body. The test materials and the competency test conditions have met the requirements set. The process of giving recommendations at SMKN 1 Penukal and SMKN 1 Indralaya Selatan is carried out by the professional certification institute; the school only accepts the competency test results given by the professional certification institute in the certification decision, and the judgment has been in line with the standards used and their measurement processes [3]. Certification can be grouped into (1) the competency test, which is a process of measuring and assessing someone's mastery of expertise against the competencies required and applied in a particular company/industry (enterprise standard), and/or on the basis of the demands of certain jobs; and (2) professional certification, the process of assessing someone's mastery of skills against a standard, based on their competence to carry out a particular occupation professionally and with authority, according to the official standards prevailing for a certain kind of professional skill. The evaluation process at SMKN 1 Penukal and SMKN 1 Indralaya Selatan needs improvement, especially the professional test, so that graduates of SMKN 1 Penukal and SMKN 1 Indralaya Selatan have a guarantee of being relevant and competent. The third evaluated component of the vocational certification process concerns components that are not yet truly available, namely the granting of the certificate of competency: the certificates issued after the implementation of vocational certification at SMKN 1 Penukal and SMKN 1 Indralaya Selatan consisted of only one kind of certificate of competency, issued by the national professional certification agency, in this case through the professional certification institute that carries out the competency test. The assessment of the vocational certification steps shows that 60% of the vocational certification steps used at SMKN 1 Penukal and SMKN 1 Indralaya Selatan are fairly reasonable with respect to the vocational certification steps set by the national professional certification body. An analysis of the achievement of subject competencies in the schools needs to be done in order to know which facilities are not available in the school but are needed in the effort to meet students' competencies in accordance with the established curriculum.
With this step, it will be possible to know which facilities are not available at the schools but are required to fulfil the needs of students under the curriculum. The preparation of certification materials is necessary so that the participants in vocational certification events can carry out the certification in a directed way, suited to the school's certification needs; the preparation of vocational materials involves the principal, teachers, administrative staff, the school's industry partners (DUDI), and parents, in order to coordinate the programs established for competencies and certification. The analysis of subject competencies in the schools and the preparation of certification materials for vocational certification at SMKN 1 Penukal and SMKN 1 Indralaya Selatan must be improved and extended so that subsequent activities become better. Based on the results of the evaluation, improvements are needed in (1) the evaluation process, (2) the process of granting certificates, and (3) the vocational certification itself.
2,004.8
2021-08-05T00:00:00.000
[ "Engineering", "Education", "Computer Science" ]
Investment climate in Ukraine: aspects of investment activity at a regional level . The main objective of this study is to examine trends and prospects of investment activity in Ukraine. This paper investigates the term “investment” and considers the role and importance of investment in social and economic development of the state. Based on the data provided by the State Statistics Service of Ukraine, the current state of foreign investment promotion to Ukraine has been analyzed. The growth rate of the investment climate in Ukraine has been distinguished. It has been defined that it is impossible to reach sustainable economic growth and overcome the consequences of the financial crisis without the injection of investment into both individual sectors of the national economy and the economy of entire regions. The article examines the dynamics of investment attraction into the regions’ economy and defines sources of such investment. As a result of the research, investment activity of regions has been outlined. Investment efficiency of regions has been assessed. Based on the findings, suggestions to improve the investment climate in Ukraine and increase regions activity have been offered. Model for investment attractiveness assessment, which takes into account availability and sufficiency of own financial resources, that define the level of region’s financial health, has been suggested. Introduction Modern mechanisms for the development of economic systems influenced by the market transformations dictate the need for reforming of existing approaches to the provision of financial resources of the state. Unstable economic and political situation in Ukraine, exacerbated by the negative consequences of the global financial crisis, necessitates closer attention of the country's leadership to the processes of economic growth stabilization. Since Ukraine has chosen the way of European integration, an urgent need has arisen for the adapting of innovative development model that would ensure the economic growth of the country, increase its potential and economic security, and allow resolving existing problems of social protection of the population. Ukrainian and foreign scientists and practitioners, as well as the pace of European countries development, prove that investment, as a component of the innovative model of economic development, makes it possible to improve and ensure sustainable economic growth. Accordingly, investment is a significant factor of development, both at the state and regional levels. Under the conditions of today, the process of effective management system development, investment activities regulation, as well as the activation of an investment mechanism aimed at developing strategic and priority sectors of the economy are of key importance. At this stage, investment activity in Ukraine, despite some positive trends, faces certain problems. Thus, the investment activity of Ukrainian and foreign investors is significantly constrained due to the unfavorable investment climate, imperfection of the legal framework; inadequate generation of investment projects and programs; as well as weak development of investment instruments and investment market. It is also worth noting that there exist certain regional imbalances in the concentration of investment resources and the outflow of foreign direct investment. 
Taking into account all these factors, we understand that nowadays there is a huge demand to improve existing mechanisms for attracting investments, and support them at the state level. Various aspects of the state investment policy implementation are constantly in the focus of attention of scientists and practitioners of different countries. Adequate investment mechanism predetermines both the inflow of capital into the country and the pace of economic development. In order to determine the place and role of investments in the economy of Ukraine and its regions, we consider it essential to investigate the definition of the term "investment". A detailed study of investment and its impact on the pace of economic development of the country was elaborated in the middle of the twentieth century. Representatives of neoclassical and neo-Keynesian schools argued that the rate of economic growth is directly influenced by the processes of capital accumulation. R. Harrod proved that investment generates savings [1]. The same hypothesis was held by R. Lucas and R. Solow [2,3]. According to the researchers, investment increases due to a favorable taxation regime or market volume growth, resulting in increased aggregate demand, employment and output levels [4]. Despite the similar approaches to determining the role of investment in the economy, the very idea of investment is approached and defined differently in different spheres of the economy. Economic theory defines investments as expenditures on acquisition, maintenance, expansion, renovation and upgrading of fixed capital. From the point of view of financial science, investments are all types of assets that are invested in economic activity with the goal of generating profit [5]. The macroeconomic approach considers investments to be a component of the gross domestic product (GDP), which provides capital gains and is not consumed at the present time. Microeconomics defines investment as a process of creating new capital [6]. The term "investment" is very broad and comes from Latin investio "to clothe in, dress, cover" or investire "endow", which means "committing money, values, means [7, p. 320] or long-term investment of capital into an enterprise, or a business" [8, p. 260]. Having carried out the research, we believe it is worthwhile to note that both Ukrainian and foreign scientists hold the view that investment is a process of committing capital with the expectation of its further increasing. Practical implementation of investments is provided by investment activities, which is a combination of income generating activities of citizens, legal entities and the state that aim at embodiment of their economic interests [9, p. 22]. Investment activity, in its essence, consists of searching for investment resources, selection of the most productive investment objects, development of investment programme, analysis of the investment project, building of investment portfolio, and realization of investments. Accordingly, investment activities influence economic growth rate of both the country on the whole and its regions, although it requires significant state regulation. Findings Over the past 20 years, Ukraine has created favorable conditions for investment activities promotion. Thus, a certain legal framework [4,[10][11][12], which defines the main mechanisms for investment policy implementation in the country, has been established. 
But, despite the positive trends at the level of government regulation, the state of investment activities remains at an extremely low level, which is significantly aggravated by a permanent reduction of investment resources [5]. Among the main reasons for the decline in investors' activity are limited domestic savings and the failure to recover capital investments. The level of direct investment is considered to be an indicator of investment activity of any state. According to the State Statistics Service of Ukraine, in 2017 the country's economy received 1.6 billion US dollars of direct investments from 76 countries. This indicator seems to be the lowest over the past seven years. Thus, in 2011 the level of investment was $6.0 billion. The received funds were directed mainly to the already developed sectors of economy, namely, to institutions and organizations that carry out financial and insurance activities (26.1%), as well as to industrial enterprises (27.3%) [13]. Ukraine's major investors included Cyprus (25.6%), the Netherlands (16.1%), Russia (11.7%), Great Britain (5.5%), Germany (4.6%), Virgin Islands (Brit.) (4.1%), and Switzerland (3.9%). Among the positive trends in 2017, the growth of the share of capital investments from public funds, especially from local budgets (Table 1), was observed. Thus, there are changes in the structure of capital investments in terms of financing sources in favor of an increase in the share of enterprises' own funds (up to 69.9% of the total volume of capital investments compared to 69% in 2016) and budgetary funds (up to 12.7% compared to 10.1% in 2016). Along with direct investments, capital investments are of utmost importance for the economy of the country. The volume of capital investments increased by 22.1% in 2017 compared to 2016. Among the industries that still have high investment potential are manufacturing (33.1%), construction (12.3%), and agriculture, forestry and fisheries (14.0%). In order to create simplified favorable conditions for foreign investors in Ukraine and prevent corruption in Ukraine, the law No. 1390-VIII "On amendments to certain legislative acts of Ukraine concerning the abolition of mandatory state registration of foreign investments" (May 31, 2016) and the Law of Ukraine "On amendments to certain legislative acts of Ukraine concerning elimination of barriers to attraction of foreign investment" (May 23, 2017) were adopted and came into force, which made it possible to settle the basic aspects of the implementation of investment policy. Nevertheless, no less important is the issue of how to increase the investment opportunities and potential of the regions of Ukraine. Analysis of the current practices of investment promotion for regions' development shows that it is impossible to solve the high-priority tasks of local authorities by means of market self-regulation. Deepening of regional disparities requires the authorities' close attention to the issue of providing regions with investment resources, as well as studying the factors that shape the investment climate and increase the investment activity of the regions. It should be noted that at this stage there is no single approach to the definition of the term "investment activity" in present studies. Some authors compare it with the concepts of "investment potential" and "investment resources".
Summarizing we can state that the investment activity of regions is the interaction of internal and external factors of the region's development, where internal factors are investment opportunities while the external factors are investment processes and investment climate. The social and economic state of the regions of Ukraine is characterized by crisis phenomena, the emergence and development of which is caused by not only external challenges and unfavorable macroeconomic trends, but to a large extent by the negative consequences of industrial enterprises restructuring and political crisis. The increase in the volume of domestic and foreign investment is an important prerequisite for the gradual recovery of economic growth which can be reached by creating favorable investment environment in Ukraine that will provide the appropriate regulatory and legal guarantees to foreign and domestic investors. Taking into account the insufficiency of budgetary funds at region level especially the issue of attracting investment funds remains quite urgent as well as assessment of investment attractiveness of the regions. It is the investment attractiveness of the region that makes it possible to attract resources and provide financing for the programs of social and economic development of the region. Nowadays there exist a significant number of approaches and methods both descriptive and rating ones which make it possible to figure out the region's level of attractiveness for investors. It is worthwhile to note that investment attractiveness is a kind of integral indicator that allows evaluating the effectiveness of invested funds on the basis of the financial and property state of the region. Also, investment potential reflects the development prospects of the region on the basis of fixed investments [14]. In turn, the assessments of the investment attractiveness of the regions, using international methods developed by such institutions as Institutional Investor [15], Euromoney [16], Business Environment Risk Index (BERI), Transparency International, and Moody's Investor Service [17] are carried out at the macro level, although they can also be applied at the meso-level. Their main tool is surveying market participants as to the barriers to and conditions of economic activities in the given region. Such assessments use the following indicators: political and legal environment; economic environment; resources and infrastructure, socio-cultural environment and ecology [15][16][17][18]. The rating assessment of the attractiveness of the regions is taken into account when making decisions regarding investing (Table 2) [19]. In turn, such scientists as I. Zablodskaya and O. Shapovalova [20, p. 69-71] suggest matrix method to assess the investment attractiveness of the region. Based on such two criteria as industrial activities and regional industry specialisation, a matrix of sectoral specialisation of regions can be built. To assess the investment attractiveness of the regions, it is advisable to determine the indicators by which selected partial indices can be calculated based on selected groups. 
Among such indicators we should highlight the following: indicators for assessing the economic development of the region (volume of products sold (industry and agriculture), UAH, per capita; volume of services sold in the region, UAH, per capita; retail turnover, UAH, per capita; number of small business enterprises per 10,000 people in the region, units; depreciation of fixed assets, %); indicators for assessing the external economic openness of the region (exports (imports) of goods and services in the region, USD, per capita; export-import coverage ratio %; ratio of total exports to Gross Regional Product, %; ratio of import volumes to private consumption expenditures of the population of the region, %); indicators for assessing the innovative and investment activities of the region (the share of innovative products sold in the total volume of industrial products sold,%; the share of innovative enterprises in the total number of industrial enterprises, %; foreign direct investment in the region's economy, USD, per capita; investment in fixed assets, UAH, per capita; investment in fixed assets and housing construction, UAH, per capita; cost of introduced fixed assets, UAH, per capita), indicators for assessing level of the region's infrastructure development [14,21]. In turn, the Rating Agency "Euro-Rating" calculates the aggregate rating of the attractiveness of regions on the basis of both investment activity (capital investment, foreign investment and construction) and the socioeconomic effect of investment (salaries, housing, employment and services provided). Using methodology elaborated by the agency [22], as well as data gained from the State Statistics Service of Ukraine, we can estimate the degree of attractiveness of the investment opportunities of the regions (Table 3). It should be noted that the main component of the methodology is rating assessment according to the 200-point rating scale [22], where: -Maximum (over 200 points) (ineA) shows that the investment policy is effective. High investment activity provides advance increment rate of the basic indicators of the social and economic sphere [22]; -High (from 181 to 200 points) (іneB) shows that the investment policy is effective. The region is characterized by significant investor activity, which provides the growth of most of basic indicators in the social and economic sphere; -Above average (from 161 to 180 points) (іneС) -the investment policy is somewhat above the average level. Investment activity is not high enough, but it allows to ensure the growth of a number of socio-economic indicators; -Average (from 141 to 160 points) (іneD) shows that the investment policy is neutral. The activity of investors, as a rule, is moderate. Most indicators of the socio-economic development are at an average level; -Below the average (from 121 to 140 points) (ineE) shows that investment policy requires improvements. The pace and volume of investment is generally lower than the national average. Most of the indicators of socio-economic development are lower than the average; -Low (from 101 to 120 points) (іneF) shows inefficient investment policy. Investors are reluctant to direct funds to the city or region economy. Most of the indicators of socioeconomic development are lower than the average; -Minimum (less than 101 points) (іnеG) shows that the investment policy is absolutely inefficient. Investors are reluctant to direct funds to the city or region economy. 
Almost all indicators of the socio-economic development are below the national average. Following the results of 2017 Poltava region took the first position in the investment efficiency rating assessment having received 225 points. For the whole year, this region never dropped out of the group of leaders and its rating did not fall below 200 points. Dnipropetrovsk and Kharkiv regions also received ineA rating level (201 and 204 points respectively). The number of regions that have high investment efficiency (ineB) remained unchanged; among them are Odesa, Kyiv and Vinnytsa regions. Odesa region has lost one position, compared with the previous period. In general, the results of 2017 were slightly worse than in the previous period. Nevertheless, the number of regions with the rating assessment below the average remained unchanged. However, it should be noted that in comparison with 2016, in 2017 there is no region at ineG level, which has a positive impact on the overall investment image of the country (Figure 1). It should be noted that in order to obtain high rating assessment the region should not only succeed in investment attraction but should also use such investment efficiently, which is only possible when the socio-economic indicators, that characterize the level of wellbeing of the population of the region, improve. Conclusions Modern practice of regional development management has faced a number of problems, namely: lack of own financial resources; distribution of transfers while drafting the local budget is based not on the actual requirements, but on the present fiscal capacity of the region; securing of the local budget balance is provided not by tax methods, but by regulation through transfers buildup. The solution of these problems, in our opinion, lies in improving the financial support for the investment development of the region. And namely, in the creation of its own revenue base sufficient for the shaping of financial capacity and maintenance of investment activity and efficiency of the region. To this end, we have developed a model for the interconnection of the financial capacity of the region with its investment activity and efficiency. In the suggested model, the assessment of investment attractiveness is based on the availability and adequacy of the own financial resources, which characterize the level of the region's financial health. In order to assess the investment activity of the region, it is expedient to use formal rates for estimating the growth of financial capacity by sources of generation. Such rates include total budget revenue increment rate; tax revenue increment rate; non-tax revenue increment rate; sale proceeds increment rate; market services proceeds increment rate; grant-in-aid amount increment rate; subvention and subsidy amount increment rate. The above-noted rates contribute to determining the level of the region's financial health, which is a direct indicator of stability and investment attractiveness. Under limited financial resources and minor investment potential of the regions at the present stage, in order to attract additional funds, it is advisable to develop a set of recommendations on creating an effective financial boost system for all participants in financing investments in the regional development, that are to be organized into two groups: benefits and incentives. 
Benefits for the participants of investment financing, depending on their nature, can be presented in the form of: -Tax benefits -investment tax credit, special tax treatment of certain activities, accelerated method of depreciation, tax rate decrease, deferral or instalment plan for tax payments, granting tax rate of 0% for certain activities, if the participants comply with the developed and established obligations; -Budgetary fiscal incentives -grants-in-aid from the local budget for legal entities and sole proprietors, subventions and subsidies for local budgets from different levels of budgets; infrastructure order placing, equity investments in acting or greenfield enterprises; guarantees. The mechanism for selecting the best investment project that participated in the competitive selection process can become an incentive, and as a result, grants from the budget and a preferential taxation mechanism are guaranteed. Boosting of these processes should also include strengthening the intellectual property protection, budgetary support for innovation activity and the ongoing updating of investment projects databases. It is necessary to inform population and investors about investment opportunities in the region, including regular updating of the investment passports of settlements. At the same time it is necessary to advertise the proven track record of investment projects and make overtures to investors for successful practices. So, the effective model of investment attractiveness for the region is the combination of effective forms of influence on the process of resources shaping and distribution in the region in order to provide its sustainable development and guarantees for the population. Thus, the conducted studies allowed to prove empirically the existence of interrelation between GDP volumes and the level of innovative development of the region, as well as to determine the top factors of regional development in different situations, depending on the degree of their innovative development. Unfortunately, numerous quantitative indices often do not fully reflect the investment effect in economic growth.
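As a small illustration of the Euro-Rating scale described above, the sketch below maps an aggregate score to one of the seven rating bands. The thresholds follow the 200-point boundaries quoted in the text, while the function name and the short labels are ours and purely illustrative.

```python
# Illustrative banding of the Euro-Rating 200-point scale quoted above.
# The thresholds follow the band boundaries given in the text; the function
# name and printed labels are ours, not the agency's.

def rating_band(points: float) -> str:
    """Map an aggregate investment-efficiency score to a rating band."""
    if points > 200:
        return "ineA (maximum)"
    if points >= 181:
        return "ineB (high)"
    if points >= 161:
        return "ineC (above average)"
    if points >= 141:
        return "ineD (average)"
    if points >= 121:
        return "ineE (below average)"
    if points >= 101:
        return "ineF (low)"
    return "ineG (minimum)"

# Scores of the 2017 leaders mentioned in the text.
for region, score in [("Poltava", 225), ("Kharkiv", 204), ("Dnipropetrovsk", 201)]:
    print(f"{region}: {score} points -> {rating_band(score)}")
```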
4,650.8
2018-01-01T00:00:00.000
[ "Economics" ]
FvNST1b NAC Protein Induces Secondary Cell Wall Formation in Strawberry Secondary cell wall thickening plays a crucial role in plant growth and development. Diploid woodland strawberry (Fragaria vesca) is an excellent model for studying fruit development, but its molecular control of secondary wall thickening is largely unknown. Previous studies have shown that Arabidopsis NAC secondary wall thickening promoting factor1 (AtNST1) and related proteins are master regulators of xylem fiber cell differentiation in multiple plant species. In this study, a NST1-like gene, FvNST1b, was isolated and characterized from strawberry. Sequence alignment and phylogenetic analysis showed that the FvNST1b protein contains a highly conserved NAC domain, and it belongs to the same family as AtNST1. Overexpression of FvNST1b in wild-type Arabidopsis caused extreme dwarfism, induced ectopic thickening of secondary walls in various tissues, and upregulated the expression of genes related to secondary cell wall synthesis. In addition, transient overexpression of FvNST1b in wild-type Fragaria vesca fruit produced cells resembling tracheary elements. These results suggest that FvNST1b positively regulates secondary cell wall formation as orthologous genes from other species. Introduction The secondary cell wall (SCW) is typically composed of lignin, cellulose, and hemicelluloses (xylan and glucomannan and galactoglucomannans). The SCW is formed inside the primary cell wall after the cell is fully expanded. SCW structures have large impacts on the characteristics of plant cells and organ development and play important roles in the dehiscence of anthers and silique pods, enhance mechanical support of organs, facilitate water transport, and provide a barrier against invasive pathogens [1][2][3][4]. The SCW is characteristically formed in xylem vessels and fibers and is crucial in the development of secondary xylem. The deposition of secondary walls reinforces stability of these cells, allowing them to provide structural support and protection [5]. The secondary walls of anther endothecium have striated patterns similar to those in tracheary elements. These secondary wall thickenings are necessary for anther dehiscence, and they generate the tensile force necessary for the rupture of the stomium [6]. Lignification in the endodermal layer of the valve margin of silique pods is necessary for their dehiscence, generating tension via desiccation and leading to pod shattering [7][8][9].Thus, SCW provides crucial biological roles in various organs, and unveiling mechanisms behind the regulation of SCW formation has been an important topic in the plant developmental research. Extensive studies have been performed in multiple species from angiosperms such as Arabidopsis and Zinnia elegance to resolve the transcriptional network controlling xylem vessel differentiation and SCW formation. Several proteins in the plant-specific NAC (NAM, ATAF1/2 and CUC2) transcription factor (TF) family have been found to play a pectin lyase) and contributes to cell wall remodeling [46]. Although FcNAC1 was reported to be clustered with VND family members in the phylogenetic analysis, whether it has the ability to induce the SCW thickening as members from other species has not been tested yet. FaRIF regulates ABA biosynthesis/signaling and cell wall degradation/modification [45]. Lignin synthesis is an important pathway regulated by VNSs. 
VNSs may have additional significance in strawberry fruit development besides regulators of SCW development, since biosynthesis pathway of lignin and anthocyanin, an important factor of color formation in strawberry fruit, share common precursor molecules [47,48]. Balance between lignin synthesis and anthocyanin synthesis needs to be well-coordinated for proper strawberry fruit development, and regulation towards VNSs may have roles in this process. In this study, we isolated and characterized a VNS subfamily gene in Fragaria vesca and named FvNST1b. The sequence information, subcellular localization, and expression pattern of FvNST1b were investigated. Transgenic Arabidopsis plants overexpressing FvNST1b showed abnormal SCW thickening and induction of SCW-associated genes. Ectopic xylem cells were also produced by transient overexpression of FvNST1b in strawberry fruits. Our work demonstrated that FvNST1b of the NAC transcription factor family in strawberry possess conserved activity to promote SCW development, and may play critical roles in SCW formation in fruit. Cloning and Sequence Analysis of FvNST1b Characterization of VNS genes from Fragaria vesca has not been performed yet. We identified Fragaria vesca VNS candidate genes from the SGR: Strawberry Genomic Resources database (http://bioinformatics.towson.edu/strawberry/; accessed on 1st November 2018). We performed multiple sequence alignment of Arabidopsis VNS family members and Fragaria vesca VNDs and NSTs. These protein sequences contain a highly conserved region towards the N-terminal, corresponding to the NAC domain, which is divided into five subdomains, A to E (Figure 1). To investigate the relationship between FvVNSs and AtVNSs, a phylogenetic tree was constructed using their amino acid sequences. The phylogenetic tree indicated that all members are divided into VND, NST, and SMB subclades, with FvNST1b together with AtNST1 protein grouped into one cluster (Figure 2). The FvNST1b is annotated to encode a protein of 365 amino acids with an estimated molecular mass of 40.8 kDa and an isoelectric point of 6.27. These results suggested that FvNST1b is the closest counterpart protein of AtNST1, and a plausible candidate for the regulator of secondary wall thickening. Amino acid sequences alignment of FvVNS and NAC proteins from Arabidopsis including AtVNSs. The proteins were initially aligned using Clustal omega. The NAC domain was marked with solid lines. Figure 2. Phylogenetic tree of FvVNS and AtVNS proteins. The proteins were initially aligned using Clustal omega and then submitted for phylogenetic analysis using MEGA X software. The phylogenetic tree was constructed using the neighbor-joining method with 1000 bootstrap replications. Numbers indicate bootstrap values for the clades that received support values of over 50%.
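For orientation only, the following rough Biopython sketch shows how a distance-based neighbor-joining tree of this kind can be produced. It is not the authors' pipeline (they used Clustal Omega and MEGA X with 1000 bootstrap replications), and the input file name is hypothetical; it is assumed to contain the pre-aligned FvVNS and AtVNS protein sequences.

```python
# Rough Biopython stand-in for the distance-based neighbor-joining clustering
# described above. The authors' pipeline used Clustal Omega and MEGA X; this
# sketch only illustrates the idea. The input file name is hypothetical and
# must already contain aligned FvVNS/AtVNS protein sequences in FASTA format.
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

alignment = AlignIO.read("fvvns_atvns_aligned.fasta", "fasta")  # hypothetical file

calculator = DistanceCalculator("blosum62")            # protein distance model
distance_matrix = calculator.get_distance(alignment)
tree = DistanceTreeConstructor().nj(distance_matrix)   # neighbor-joining tree

Phylo.draw_ascii(tree)  # quick text view of the VND/NST/SMB-style clustering
```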
Subcellular Localization of FvNST1b Protein As a transcription factor, FvNST1b is expected to function in the nucleus. In order to examine its subcellular localization in vivo, we generated a vector containing the coding region of FvNST1b fused with the GFP reporter gene. The fusion gene plasmid and GFP control plasmid were transiently transformed into Nicotiana Benthamiana (hereafter tobacco) leaves and strawberry fruits. At 3 days after injection, a strong fluorescence signal was detected in the nucleus of tobacco leaf epidermal cells (Figure 3A), and a strong GFP signal in the nucleus was detected in the strawberry fruits at 4 days after injection (Figure 3B). In some cells, the GFP signal was also weakly detected in the surrounding area of the nucleus, presumably in cytosol or ER, which may represent FvNST1b-GFP protein unsorted to the nucleus. At 7 days after induction, tobacco cells with ectopic striated cell walls are formed as in the overexpression of AtNST1 reported by others (Supplemental Figure S1) [2]. Transient overexpression of FvNST3-GFP also induced similar effects (Supplemental Figure S1). We also generated stably transformed Arabidopsis plants with FvNST1b-GFP. Arabidopsis transgenic seedlings also showed strong nuclear-localized GFP signals and weak cytoplasmic/ER signals in roots (Figure 3C). The images of bright field, GFP and merged were shown. Scale bars represent 10 µm for zoom and 100 µm for root, respectively. Expression Analysis of the FvNST1b Gene Tissue-specific expression analysis of FvNST1b was performed by qRT-PCR using various tissues from strawberry plants. FvNST1b displayed a differential expression pattern in F. vesca (Figure 4). FvNST1b transcripts were almost undetectable in leaf and white fruit, and its expression was detected in vegetative parts of strawberry at relatively low levels, including in roots and stems, whereas its expression in flowers and green fruits is significantly higher, with the highest level in green fruits. This observation suggested that FvNST1b mainly functions in the fruits of strawberry at the earlier developmental stages. Overexpression of FvNST1b Induces Ectopic Thickening of Secondary Walls in Various Tissues of A. thaliana In order to test if FvNST1b has the ability to promote SCW deposition, we expressed FvNST1b-GFP ectopically under the control of the CaMV35S promoter (35S:FvNST1b-GFP) in transgenic Arabidopsis plants. The 35S:FvNST1b-GFP plants were usually smaller and grew more slowly than wild-type plants. Ectopic expression of FvNST1b-GFP induced ectopic lignified secondary wall thickening in various tissues, including anthers, stamens, ovules, stems, leaves, and root tissues, as reported for the overexpression of Arabidopsis NST1 [2]. Epidermal cells with ectopic secondary wall thickening typically had a striated appearance similar to that of tracheary elements (Figure 5). These observations suggest that the abnormal appearance of leaves and floral organs of 35S:FvNST1b-GFP plants was due to the ectopic accumulation of lignified materials, reflecting previous reports that NSTs are regulators of secondary wall thickening in various tissues.
Overexpression of FvNST1b Induces SCW Formation in Strawberry Fruits To further confirm that FvNST1b has the ability to promote SCW deposition in strawberry fruit, we transiently overexpressed the FvNST1b gene by using the agrobacterium infiltration into S7 fruits of Fragaria vesca (Figure 6), which is at the green stage and make the transition to ripening [49]. After four days from infiltration, the 35S:FvNST1b-GFP infiltrated strawberry fruits exhibited GFP signals (Figure 6A-D). After five days, the 35S:FvNST1b-GFP infiltrated strawberry fruits exhibited enhanced lignification phenotypes, along with many cells having striated appearance similar to that of tracheary elements (Figure 6E,F,J-L). The 35S:FvNST1b-GFP infiltrated fruits tended to be more pliable and the space among seeds closer than that of 5d after the injection of PGWB505 vector-control infiltrated fruits (Figure 6G-I,M-O). Longitudinal sections of strawberry fruits stained for lignin and cell wall structure confirmed that overexpression of FvNST1b-GFP resulted in excessive SCW deposition in strawberry fruit cells, indicating their ability to promote SCW formation in strawberry fruits. Enhanced Gene Expression of SCW Related Genes in 35S:FvNST1b Transgenic Arabidopsis Plants To further prove the ability of FvNST1b to promote SCW development as in the orthologous genes from other species such as AtNST1, we examined the effect of FvNST1b-GFP overexpression in Arabidopsis on gene expression of known downstream genes for AtNST1 (Figure 7). We examined the expression of IRREGULAR XYLEM3 (IRX3; encodes a cellulose synthase), IRX4 (encodes a cinnamoyl CoA reductase), IRX12 (encodes a putative laccase) as genes known to be upregulated by AtNST1 overexpression [50][51][52], and HOMEOBOX GENE8 (ATHB-8), which is involved in the vascular developmental process upstream of VNDs/NSTs [53]. The expression of the IRX3, IRX4, and IRX12 genes was enhanced 1-to 10-fold in all four of the independent 35S:FvNST1b-GFP transgenic lines examined, as compared with the wild type. In contrast, ATHB-8 was not upregulated in 35S:FvNST1b-GFP plants. Our results suggest that FvNST1b has the ability to function as a crucial regulator for secondary wall thickening by inducing key downstream genes similar to their counterpart transcription factors in other plant species. Expression of genes related to the differentiation of tracheary elements were analyzed in 2 WT plants as controls and 4 independent transgenic Arabidopsis lines overexpressing FvNST1b-GFP by quantitative RT-PCR. Each bar represents the amount of the transcript of a gene relative to that of the internal control. Error bars represent ± SD (n = 3). Asterisk indicate a significant difference compared to the WT1 by t-test (****, p < 0.0001; ***, p < 0.001).
Discussion Strawberry is one of the most economically important fruit crops and has been considered a genuine example of a plant showing non-climacteric fruit ripening [54,55]. The ripe strawberry fruits undergo continual softening and easily become rotten. Thus, improvement of strawberry shelf life has become an important factor in current breeding programs, even when these quality attributes are controlled by a complex genetic background [56]. Secondary wall thickening provides mechanical support for various plant tissues, and thus the SCW formation may contribute to fruit firmness [25,57]. NSTs/VNDs are master transcriptional switches regulating the developmental program of SCW biosynthesis by activating downstream transcription factors [25,58]. Although NAC transcription factors and the lignin biosynthesis have been studied in strawberry fruit development recently [45,46], their contribution regarding the molecular control of secondary wall thickening is largely unknown. In the present study, we described the isolation and characterization of FvNST1b, an NST1-like homolog from Fragaria vesca. Amino acid sequence alignment and phylogenetic analysis suggested that FvNST1b is a member of the NST class of NACs. Plant NAC domain proteins are one of the largest groups of plant-specific transcriptional factors and have been reported to participate in many developmental processes, including SCW formation and biotic and abiotic stress responses [59,60]. Amino acid sequences of NAC proteins typically contain an NAC domain; five highly conserved subdomains at the N-terminal. The five highly conserved subdomains are also present in the FvNST1b sequence. Our phylogenetic analysis placed FvNST1a and FvNST1b as the closest homolog of Arabidopsis NST1 and NST2. Indeed, results of our overexpression experiments suggested that FvNST1b and FvNST3 have the ability to promote SCW development as in Arabidopsis NSTs.
In our phylogenetic analysis, we could not find counterparts for some members of the VND-related Arabidopsis proteins, namely SOMBRERO (SMB), BEARSKIN1 (BRN1), and BRN2 among strawberry VNS candidates [35]. Those proteins still have the ability to induce ectopic SCW deposition when overexpressed, but they are involved in root cap development in Arabidopsis [34,35]. It is speculated that there may be other unidentified members of the VND family or other transcription factors in strawberry to regulate the development of strawberry root caps. FvNST1b is expressed preferentially in strawberry green fruit, suggesting that it has important roles in the regulation of strawberry fruit development. Overexpression of FvNST1b in Arabidopsis caused ectopic deposition of SCW ( Figure 5), in agreement with previous studies. In addition, overexpression of FvNST1b in strawberry fruits also caused ectopic deposition of SCW along with lignin accumulation, fruit shrinkage, and fruit color change. Anthocyanin contributes to the fruit color in strawberry. Biosynthetic pathways for lignin and flavonoids, including anthocyanin, share common precursors from the general phenylpropanoid pathway [48]. Several TF genes in the MYB family are reported to corepress or co-activate genes involved in the biosynthesis of lignin and flavonoids. There are also cases of regulation towards lignin or flavonoid synthesis to achieve proper balance of carbon flow between lignin and flavonoids. Some of those members such as AtMYB20 are reported to be under regulation by VNSs [61]. Thus, VNSs in strawberry including FvNST1b may contribute to co-regulate and/or balance lignin deposition and anthocyanin synthesis, which is a crucial factor for strawberry fruit quality and commercial value. Future analysis of contribution of VNSs on the regulation of flavonoid-synthesis-related genes will be important to further prove this idea. A nuclear localization signal was predicted within FvNST1b amino acid sequences, and the transient transformation of tobacco and strawberry fruit cells with FvNST1b fused to a reporter gene showed that FvNST1-GFP localizes to the nucleus. Furthermore, nuclear localization was detected in 35S:FvNST1b-GFP transgenic Arabidopsis plants. Other members of this NAC family also exhibited nuclear localization: an NAC from S. lycopersicum was shown to be located in the nucleus when ectopically expressed in onion epidermal cells, as was also the case for AtNAC2 [62]. MtNST1 from alfalfa, described as a SCW master switch, was also identified in the nucleus of epidermal tobacco cells [63]. The tomato SlNAC3 has been localized in the nucleus of onion epidermal cells by transient expression analysis [62]. Seven GhSWN proteins from cotton all located in the nucleus and were consistent with their functions as transcription factors [27]. In order to test whether FvNST1b has ability to activate SCW-related genes as in AtNSTs, the expression of IRX3, IRX4, IRX12, and ATHB-8 were examined in 35S:FvNST1b transgenic plants, which are involved in the differentiation of tracheary elements upstream or downstream of AtNSTs [2]. IRX3, IRX4, and IRX12 were upregulated in 35S:FvNST1b transgenic plants. In contrast, ATHB-8 was not regulated in 35S:FvNST1b plants, which is in line with previous reports in Arabidopsis. These results show that FvNST1b is a positive regulator of secondary wall thickening. 
Induction of downstream genes of AtNST1 in transgenic 35S:FvNST1b Arabidopsis plants further support the idea that FvNST1b acts as a transcriptional factor to regulate downstream processes of SCW development. In summary, an NST1-like gene, FvNST1b, was isolated and characterized from strawberry. FvNST1b has high sequence similarity to other NSTs homologs and contained the well-conserved NAC domain. The FvNST1b protein mainly localizes in the nucleus. FvNST1b is highly expressed in young fruit. In addition, overexpression of FvNST1b caused ectopic deposition of SCW and upregulated the expression of genes related to the differentiation of tracheary elements such as IRX3, IRX4, and IRX12 in transgenic Arabidopsis. Moreover, overexpression of FvNST1b in strawberry fruits also caused ectopic deposition of SCW along with lignin accumulation and fruit shrinkage. These results suggest that FvNST1b is a transcription factor promoting SCW thickening in strawberry. Although we were unable to detect any expression of FvNST1a in strawberry fruit, we also showed that at least another closely related member, FvNST3, which is expressed in fruit, has a similar function. Functional redundancy as well as their specialization will be explored in the future. The evidence provided will contribute to understanding the regulatory network that takes place during the development and ripening of strawberry fruit. Plant Material and Growth Conditions Diploid strawberry plants (Fragaria vesca), Yellow Wonder 5AF7 (YW5AF7) [40], planted in pots (90 mm × 90 mm × 90 mm) were used in this study. The seedlings were grown and maintained in a growth room with the following conditions: 22 • C, 60% humidity, and a 16-h photoperiod. Hand pollination was performed by using downy water bird feather to obtain pollinated fruit. Samples of root, stem, leaf, flower, and fruit were collected for tissue-specific expression assays. For Arabidopsis transformation, Arabidopsis thaliana (ecotype Columbia) was used and grown in soil at 22 • C with 16 h of light daily. DNA Preparation and Gene Cloning Genomic DNA (gDNA) of strawberry samples was isolated by the CTAB method. To clone the FvNST1b gene, the AtNST1 protein was used for a BLAST search in the strawberry genome GBrowse (http://www.strawberrygenome.org/ (accessed on 19 October 2022)), and a high homology protein with the gene locus101309102 was found. Then, the specific primers for full-length of DNA cloning were designed for FvNST1b (forward, 5 -attB1-ATG ACT GAA AAC GTG AGC AT-3 ; reverse, 5 -attB2-TTA TAT ATG ACC ATT CGA CAC GTG-3 ) and FvNST3 (forward, 5 -attB1-ATG TCT GCA GAG GAT CAA ATG-3 ; reverse, 5 -attB2-TTA TAC CGA CAG GTG GCA TAA TG-3 ) using the SnapGene program (https://www.snapgene.com/ (accessed on 19 October 2022)). PCR was performed using Primer Start Max Enzyme (TaKaRa Biotech, Dalian, China) under the following conditions: 98 • C for 30 s, followed by 34 cycles at 98 • C for 10 s, 55 • C for 15 s, and 72 • C for 30 s. Construction of Plasmid DNA The GATEWAY™ conversion technology (Invitrogen, Gaithersburg, MD, USA) was used in the experiment. To generate the FvNST1b overexpression vector, full-length FvNST1b DNA (1541 bp) was amplified and inserted into the PDONR221 vector under the treatment of BP Enzyme (Invitrogen). The entry vector DNAs were transformed into Escherichia coli DH5α cells and sequenced. The PDONR221-FvNST1b was treated with LR Enzyme (Invitrogen) and cloned into the PGWB505 vector containing the GFP reporter gene to generate PGWB505-FvNST1b-GFP. 
Transient Expression of FvNST1b in Nicotiana Benthamiana Leaves and Sub-Cellular Localization Analysis PGWB505-FvNST1b vector was introduced into Agrobacterium tumefaciens strain GV3101 by thermal shock in liquid nitrogen. Transformed bacteria were plated on a selective medium yeast mold agar containing kanamycin, hygromycin, and rifampicin at a final concentration of 100 µg/mL each. Resistant colonies were analyzed by PCR for the presence of the full-length FvNST1b gene using the primers mentioned above. A positive colony was cultured in selective LB liquid medium and incubated at 28 °C until an O.D.600 between 0.8 and 1.0, and then the cells were resuspended in infection buffer and shaken for 2 h at 28 °C. A 1-mL syringe was used to inject the agrobacterium suspension into the abaxial face of young tobacco leaves (two weeks old), and samples were analyzed after two days of infiltration. Subcellular localization of FvNST1b in transient-transformed leaf samples was analyzed through visualization of the tissue under a confocal fluorescence microscope (Leica Confocal microscope SP8X; Leica Microsystems GmbH, Wetzlar, Germany) with a 10× objective lens, a 488 nm laser from a tunable white light laser for excitation, and a 499 nm to 551 nm bandwidth for detection. Gene Expression Level Analysis Total RNA from the strawberry samples was extracted using the polysaccharide and polyphenolics-rich RNAprep Pure Kit (Tiangen, Beijing, China); cDNA was synthesized from total RNA using the PrimeScript RT reagent Kit (Perfect Real Time) (Takara). Total RNA of Arabidopsis was extracted using the PLANT RNA Kit (omega). The cDNA samples were diluted 1:5 with water; 2 µL of the diluted cDNA was used as a template for quantitative real-time PCR (qRT-PCR) analysis. Real-time quantitative PCR was performed in the ABI 7500 Real-Time PCR System (Applied Biosystems, Waltham, MA, USA) using SYBR Premix Ex Taq II (Takara). The PCR program included an initial denaturation step at 95 °C for 3 min, followed by 40 cycles of 10 s at 95 °C, and 30 s at 57 °C. Each sample represented three biological replicates; each of them included four technical replicates. The relative expression levels of target genes were calculated with the formula 2^(−ΔΔCT) in strawberry and 2^(−ΔCT) in Arabidopsis. Arabidopsis Transformation The Agrobacterium tumefaciens strain mentioned earlier was used to transform wild-type Arabidopsis plants using the floral dip method. Transgenic seedlings were selected on half-strength Murashige and Skoog (MS) agar plates containing 50 mg/L hygromycin and 200 mg/L Timentin; antibiotic-resistant plants were then tested by GFP signal and separation ratio to confirm the presence of the transgene. Four independent lines of the T3 generation were randomly chosen for further analysis. Transient Overexpression of FvNST1b in Strawberry Fruit Agrobacterium tumefaciens strain GV3101 mentioned above was used to perform transient expression analyses in strawberry fruits [64]. For Agrobacterium infection, the Agrobacterium suspension was injected into the fruit using a syringe of 1 mL capacity. To do this, the needle tip was inserted into the fruit center from the top, and then the Agrobacterium suspension was slowly and evenly injected into the fruits until the strawberry fruit was completely infected. After the infection, the fruits were incubated under the conditions required for the different experimental aims.
The effect of overexpression was evaluated by examining the changes in both reporter gene expression and related phenotypes after Agrobacterium infection. Fruit Sections and Staining The infected strawberry fruits were embedded in 10% agarose gel at 7 days after infection, and 200 µm thick sections were cut with a vibratome. Strawberry fruit sections and Arabidopsis seedlings were fixed with 4% PFA for 60-120 min at 23-25 °C with vacuum treatment. After fixation, the materials were washed twice for 1 min in 1× PBS and moved to the clearing solution. After rinsing in 1× PBS, the plant material was transferred to the ClearSee solution [65] and cleared overnight at room temperature. We prepared 0.1% Auramine O in ClearSee solution and the materials were stained overnight. Then, the materials were washed for at least 1 h with gentle shaking. The materials were transferred to 0.1% Calcofluor White in ClearSee solution and stained for 30 min; the materials were then washed in ClearSee for 30 min with gentle shaking. Materials were analyzed with a Leica TCS SP8X inverted confocal microscope. Imaging of Calcofluor White was performed with a 405-nm diode laser for excitation and a detection bandwidth of 425-475 nm. Imaging of Auramine O was performed with 488 nm from a tunable white light laser and a detection bandwidth of 505-530 nm.
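As a numerical aside, the 2^(−ΔΔCT) normalisation used for the strawberry qRT-PCR data above can be sketched as follows; the Ct values and sample labels are invented for illustration only.

```python
# Minimal sketch of the 2^(-ΔΔCT) (Livak) calculation referred to above.
# All Ct values and labels below are invented purely for illustration.

def relative_expression(ct_target, ct_reference, ct_target_ctrl, ct_reference_ctrl):
    """Fold change of a target gene in a sample versus a calibrator sample,
    normalised to an internal reference (housekeeping) gene."""
    delta_ct_sample = ct_target - ct_reference                # ΔCt of the sample
    delta_ct_calibrator = ct_target_ctrl - ct_reference_ctrl  # ΔCt of the calibrator
    return 2.0 ** -(delta_ct_sample - delta_ct_calibrator)    # 2^(-ΔΔCt)

# e.g. FvNST1b in green fruit versus leaf, normalised to a housekeeping gene
fold = relative_expression(ct_target=22.1, ct_reference=18.0,
                           ct_target_ctrl=26.4, ct_reference_ctrl=18.2)
print(f"relative expression: {fold:.1f}-fold")
```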
5,992
2022-10-30T00:00:00.000
[ "Biology" ]
Basic Tutorial on Sliding Mode Control in Speed Control of DC-motor One technology to support production speed is electric motors with high performance, efficiency, dynamic speed and good speed responses. DC motors are one type of electric motor which is used in the industry. Sliding Mode Control (SMC) is the robust nonlinear control. The basic theory regarding SMC is presented. The SMC design which is implemented is the speed control of the DC motor is analyzed. The controller is implemented in simulation using MATLAB / Simulink environment. The step response and signal tracking test unit are carried out. The results show that SMC has a better performance compare to PID which is faster settling time and no overshoot and undershoot. Keywords—railway, traction motor, motor control, MRAS I. INTRODUCTION One of the technologies used to accelerate the production process in the industry is an electric motor. Electric motor based on the electric current is divided into two, namely DC motors and AC motors. DC motors were first discovered and are still widely used today because of the ease in the control system. However, DC motors require more expensive maintenance costs than AC motors because of the brush that can wear out. Electric motor applications in the industrial world include electric cars, robotic actuators, paper machines and home applications [1]. PID is one of the classic control methods that are still widely used. Based on [2][3] 90% of industries still use PID because of the advantages of being simple and applicable. However, one disadvantage of this method is that performance decreases if the plant is non-linear [4]. Sliding Mode Control (SMC) is one controller that can handle plant nonlinearity conditions. R.K. Munje et.al. [5] said that the advantages of SMC are robustness, ability to deal with non-linear systems, time-varying systems, it can be designed for fast dynamic responses and good abilities over the wide range. DC motor will have electrical parameters changes as the temperature increase, current, and voltage fluctuations, timevarying loading conditions, driving and operating conditions [6]. These changes are making the DC motor has non-linear characteristics. Therefore, the non-linear control method is needed. In this research, the SMC method will be applied in speed control of DC motor. Classical PID is used to compare the performance of SMC. This paper is organized as follows. Section II present the theoretical review of SMC which mentioned before. In section III, design of SMC for speed control of DC motor is presented. In section IV, the simulation results are discussed. Finally, the conclusion is in section V. II. SLIDING MODE CONTROL The SMC works by bringing the state from the system to the sliding surface and then to the central point, as illustrated in Figure 1 (a). Next Figure 1 (b) shows the switching of the control signal tracking the sliding surface to the origin. Sliding surface is a condition where the switching function (s) is zero (s = 0). On the top and bottom of the sliding surface, there is a limiting switching value, ± Δ. If a state x (t) is x (t)> Δ then switching will be off, otherwise if x (t) <-Δ then switching will be on. System reach sliding surface by making transition between stable and unstable trajectories and error converge to zero in sliding surface [7]. The aim of the SMC is that the output of the system is to track to the desired reference and to produce control signals which make minimum tracking error. 
The control signal in the SMC consists of two parts, corresponding to the reaching mode and the sliding mode. In the reaching mode, a switching control (u_sw) is required to bring the state of the system to the sliding surface. In the sliding mode, an equivalent control (u_eq) keeps the system state on the surface. The sliding mode control signal can therefore be written as in equation (1), u = u_eq + u_sw, while the switching control is defined as in (2), u_sw = K sign(s), where K > 0 is selected sufficiently large. The larger the value of K, the faster the trajectory converges to the sliding surface. Figure 2 shows the SMC diagram where the control signal consists of equivalent control and switching control. SMC has the disadvantage of chattering caused by the discontinuous switching control. There are many methods to reduce chattering; one can consider pseudo sliding with a smooth control action [8], for example u_sw = K s/(|s| + δ) (3), where δ is a small tuning scalar, called the tuning parameter, used to reduce chattering. III. SPEED CONTROL OF DC MOTOR A. DC Motor Model Many DC motor models can be found in references such as [9][10]. In this research, the real DC motor is modeled using MATLAB System Identification. The input-output data, voltage and speed, are collected using an Arduino data logger. The data are processed using MATLAB and the second-order transfer function of the DC motor is found as in (4) [11], G(s) = ω(s)/V(s) = 138.5/(s^2 + 1.48s + 12.81). Figure 1 shows the DC motor and controller as the plant. Then the system can be converted into the following canonical form, with x1 = ω and x2 = ω̇: ẋ2 = ω̈ = −1.48x2 − 12.81x1 + 138.5u (8). Now, the equivalent control and switching control can be designed using the sliding surface s = Ce + ė, with the tracking error e = ω_ref − ω, where ω_ref is the speed reference and C > 0 is a performance parameter which guarantees the stability of the system [12]. On the sliding surface, s = 0 → ṡ = 0 (12). The equivalent control can be found by substituting equation (8) into this condition. IV. SIMULATION RESULTS The Simulink model of the control system is shown in Fig. 4. Fig. 4(a) is the Simulink model for the PID controller that will be compared with the SMC performance; the PID parameter values are chosen by trial and error. Fig. 4(b) is the Simulink model for the SMC controller. A. Unit Step Response The unit step response is the basic test used to evaluate the performance of the controller, including settling time and overshoot. Fig. 5 shows the step responses of the system under PID and SMC control. It is clearly shown that PID has a faster rise time; however, its settling time is longer due to oscillation before reaching the steady state. On the other hand, SMC has a slower rise time without oscillation and reaches the steady-state condition faster than PID. Both controller responses have zero steady-state error. Detailed controller performance parameters can be seen in Table 1. It is clearly seen that the PID response has a large overshoot and undershoot while SMC has none. B. Signal Tracking The next test is signal tracking. In this test, setpoint changes are given to the system. Fig. 6 shows the result of the test. As in the unit step response, PID has a faster rise time; however, it oscillates before reaching the steady-state condition, which makes the settling time longer. When the set point decreases, PID again oscillates before settling, while SMC does not. Detailed controller parameters can be seen in Table 2 for the second set point, because the first set point is the same as the unit step response in Table 1. It can also be seen that, when the set point decreases, SMC has neither overshoot nor undershoot, while PID still has a large overshoot and undershoot. V. CONCLUSION SMC is the robust non-linear control.
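The design above can be made concrete with a short numerical sketch. The plant is the identified second-order model in (8); the gains C and K, the boundary-layer width δ, and the speed reference are illustrative choices and are not the values used in the paper's MATLAB/Simulink implementation.

```python
# Minimal numerical sketch of the SMC law above, applied to the identified
# second-order DC-motor model in (8). It is not the paper's MATLAB/Simulink
# implementation; the gains C and K, the boundary-layer width delta, and the
# speed reference are illustrative choices.

dt, T = 1e-4, 3.0
omega_ref = 100.0                 # constant speed reference [rad/s] (assumed)

x1, x2 = 0.0, 0.0                 # x1 = speed, x2 = speed derivative
C, K, delta = 10.0, 20.0, 0.5     # surface slope, switching gain, boundary layer

speeds = []
for _ in range(int(T / dt)):
    e = omega_ref - x1            # tracking error
    s = C * e - x2                # sliding variable s = C*e + de/dt (de/dt = -x2)

    # Equivalent control: solve ds/dt = 0 using the model in (8)
    u_eq = ((1.48 - C) * x2 + 12.81 * x1) / 138.5
    # Pseudo-sliding switching term (smooth boundary layer to limit chattering)
    u_sw = K * s / (abs(s) + delta)
    u = u_eq + u_sw

    # Plant, eq. (8), integrated with a simple Euler step
    x1 += dt * x2
    x2 += dt * (-1.48 * x2 - 12.81 * x1 + 138.5 * u)
    speeds.append(x1)

print(f"final speed: {speeds[-1]:.2f} rad/s, peak: {max(speeds):.2f} rad/s")
```

With these (illustrative) gains the speed approaches the reference without overshoot, which mirrors the qualitative behaviour reported for SMC in the step-response test.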
V. CONCLUSION SMC is a robust non-linear control method. The basic theory of SMC has been discussed, and the SMC design for speed control of a DC motor has been explained. The design was implemented in simulation in the MATLAB/Simulink environment, and unit step response and signal tracking tests were carried out. The results show that SMC performs better than PID, with a faster settling time and no overshoot or undershoot.
Internally driven large-scale changes in the size of Saturn's magnetosphere

Abstract Saturn's magnetic field acts as an obstacle to the solar wind flow, deflecting plasma around the planet and forming a cavity known as the magnetosphere. The magnetopause defines the boundary between the planetary and solar dominated regimes, and so is strongly influenced by the variable nature of pressure sources both outside and within. Following from Pilkington et al. (2014), crossings of the magnetopause are identified using 7 years of magnetic field and particle data from the Cassini spacecraft, providing unprecedented spatial coverage of the magnetopause boundary. These observations reveal a dynamical interaction where, in addition to the external influence of the solar wind dynamic pressure, internal drivers, and hot plasma dynamics in particular, can take almost complete control of the system's dayside shape and size, essentially defying the solar wind conditions. The magnetopause can move by up to 10–15 planetary radii at constant solar wind dynamic pressure, corresponding to relatively "plasma-loaded" or "plasma-depleted" states, defined in terms of the internal suprathermal plasma pressure.

Introduction The interaction between the solar wind and the magnetic field of a planetary body gives rise to the formation of a magnetosphere, which encloses the planet and shields it from direct bombardment by plasma of solar origin. The magnetopause is the boundary that separates these populations, and it forms where the solar wind dynamic pressure is balanced by internal pressure sources when the boundary is stationary. In reality, however, the pressure on either side of the boundary is highly dynamic and the magnetopause is in almost continual motion [e.g., Kaufmann and Konradi, 1969]. At Earth, the principal internal pressure source is the magnetic pressure. Saturn differs in this regard: measurements made by Voyagers 1 and 2 found that energetic plasma is ubiquitous within Saturn's magnetosphere [Krimigis et al., 1982, 1983]. Later, early in the Cassini mission, it was found that Enceladus ejects plumes of water group molecules into Saturn's magnetosphere [e.g., Dougherty et al., 2006; Porco et al., 2006]. A small fraction of these are ionized into a plasma, and this can greatly influence the dynamics that drive the magnetosphere. Estimates vary substantially, but Bagenal and Delamere [2011] find that the plasma source rates lie between 12 and 250 kg s−1. Similarly, Io is a large source of plasma within Jupiter's magnetosphere, with typical plasma source rates exceeding those of Enceladus in absolute terms by at least an order of magnitude. However, Vasyliunas [2008] showed that Enceladus may be a more significant plasma source to Saturn's magnetosphere than Io is to Jupiter's because, in relative terms, it may cause flux tubes to become more heavily loaded with mass and hence perturb Saturn's magnetic field more strongly. At Saturn's magnetopause, the pressure associated with the suprathermal component of this plasma is of the same order as the magnetic pressure and acts to inflate the magnetosphere, significantly increasing its size beyond what would be expected of the magnetic pressure alone.
Sergis et al. [2007, 2009] found that the plasma sheet extends all the way out to the dayside magnetopause boundary and that the plasma β at Saturn (the ratio of plasma to magnetic pressure) for ions with energies greater than 3 keV, at radial distances concurrent with the magnetopause, varies between ∼10−2 and 101. Plasma dynamics are thus likely to have a significant impact on the size and shape of the Kronian magnetopause due to the highly variable nature of β just inside. Previous empirical studies have treated the solar wind dynamic pressure as the primary source of variability in the location of the magnetopause. However, magnetohydrodynamic (MHD) studies of the Kronian magnetosphere [e.g., Zieger et al., 2010] found that internal plasma dynamics can change the geometry of Saturn's magnetopause significantly under conditions when the solar wind pressure is low. Moreover, no steady state magnetopause boundary is obtained in these simulations under low solar wind dynamic pressure conditions. Here it will be shown that internal plasma dynamics imparts a similar degree of variability to the location of Saturn's magnetopause as does variability in the solar wind pressure. In addition to this aspect, previous studies are expanded upon by including high-latitude observations of Saturn's magnetopause in both hemispheres and near-equatorial observations of both the morning and the evening sectors, providing much greater coverage of the dayside magnetopause. Furthermore, a more sophisticated fitting routine is used, and a new method of calculating the perpendicular distance between the crossing and the model surface near-exactly is presented in order to fit an empirical model to these data more accurately. A more realistic estimate for the thermal ion pressure at the magnetopause is also calculated. In section 2, previous empirical models of Saturn's magnetopause and the improvements made in this study are outlined; in section 3 the in situ magnetopause observations are discussed; and in section 4.1 the results of fitting the model to the Cassini data are presented and discussed. In section 4.2, a substantial enhancement is made to the empirical model in order to address a major shortcoming in its application to magnetospheres with significant internal plasma sources. These results are further discussed and summarized in section 5.

Previous Work The Shue et al. [1997] empirical shape model was originally devised to model the terrestrial magnetopause,

r = r_0 [2 / (1 + cos θ)]^K, (1)

where r is the distance from the planet center to the point on the magnetopause surface described by the angle θ, the angle between the position vector of this point and the planet-Sun line. The surface is parameterized in terms of the standoff distance, r_0, which controls the size of the magnetosphere, and the "flaring" parameter, K, which controls the downstream shape as shown in Figure 1. As well as the solar wind dynamic pressure, D_P, Shue et al. [1997] also presented forms of the magnetospheric standoff distance and the flaring parameter that depend on the orientation of the interplanetary magnetic field (IMF). Dayside magnetic reconnection is most efficient when the IMF and planetary magnetic fields are antiparallel so, as a result, extended periods of southward IMF cause erosion of the dayside magnetopause due to enhanced reconnection [e.g., Aubry et al., 1970].
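As a quick illustration of equation (1), the sketch below evaluates the axisymmetric Shue surface; the values of r_0 and K are arbitrary placeholders, not fitted coefficients.

```python
import numpy as np

def shue_surface(theta, r0=25.0, K=0.8):
    """Axisymmetric Shue et al. [1997] magnetopause surface,
    r(theta) = r0 * (2 / (1 + cos(theta)))**K,
    with theta the angle from the planet-Sun line.
    r0 and K here are illustrative, not fitted values."""
    return r0 * (2.0 / (1.0 + np.cos(theta))) ** K

theta = np.linspace(0.0, 0.9 * np.pi, 200)     # avoid theta = pi, where r diverges
r = shue_surface(theta)
x, rho = r * np.cos(theta), r * np.sin(theta)  # coordinates in R_S
```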
This model was applied to Saturn's magnetopause by Arridge et al. [2006] using observations from the first six orbits of the Cassini spacecraft, along with the flybys of Voyagers 1 and 2, with the standoff distance and flaring parameter expressed as power laws in the dynamic pressure,

r_0 = a_1 D_P^(−a_2), (2)
K = a_3 + a_4 D_P. (3)

The coefficients a_i were determined using an interior reflective Newton-Raphson fitting routine to fit the model to a set of Cassini observations. Coefficients a_1 and a_2 set the size scale and the compressibility, or response to changes in D_P, of the system. The IMF dependency was omitted because it could not be measured in the absence of a dedicated upstream monitor close to Saturn. More recently, MHD simulations by Jia et al. [2012] have found that the magnetosphere is insensitive to changes in the IMF. In the absence of a dedicated upstream dynamic pressure monitor, D_P was calculated by balancing its normal projection with the interior magnetic pressure adjacent to the magnetopause current layer. A Newtonian pressure balance equation from aerodynamic studies of supersonic flow around a body was used, which was first applied in the context of magnetospheric physics by Petrinec and Russell [1997]:

k D_P cos^2 Ψ + P_0 = B^2 / (2 μ_0), (4)

where B is the magnitude of the interior magnetic field and μ_0 is the permeability of free space. Ψ is the angle between the direction opposite to the upstream solar wind velocity, assumed to be along the Sun-planet line, and the normal to the magnetopause surface at the observation location. P_0 is the static (thermal) component of the magnetosheath pressure. The constant factor k relates to the divergence of the streamlines of flow around the magnetosphere, which acts to reduce the dynamic pressure. In the high Mach number regime appropriate for Saturn [e.g., Slavin et al., 1985], a value of k ∼ 0.88 is applicable, as shown by Walker and Russell [1995]. It can readily be seen that close to the standoff point, where Ψ → 0°, the dynamic pressure dominates, but away from the "nose" of the magnetosphere, where Ψ → 90°, the dynamic pressure term reduces to zero and the static pressure dominates in the lobes. Kanani et al. [2010] used the same empirical model but improved on previous studies by using more magnetopause observations. They also used measurements from the Magnetospheric Imaging Instrument (MIMI) [Krimigis et al., 2004] and the Electron Spectrometer (CAPS-ELS) [Young et al., 2004] to estimate the suprathermal magnetospheric plasma and electron pressures, respectively. Together with the magnetic pressure and the assumption of pressure balance across the magnetopause, a more realistic estimate of the dynamic pressure was obtained. In addition, P_0 was expressed as a function of D_P, as previous estimates were too small to be consistent with MHD simulations. Furthermore, if P_0 is kept constant but exceeds a critical value, imaginary flow velocities are introduced. As a result, Kanani et al. [2010] proposed the following modified pressure balance condition across the magnetopause boundary,

D_P [k cos^2 Ψ + k_b T_SW / (1.16 m_p u_SW^2)] = B^2 / (2 μ_0) + P_MIMI + P_ELS, (5)

where k_b is the Boltzmann constant, m_p is the mass of a proton, and T_SW and u_SW are the solar wind temperature and velocity, respectively, for which values of 100 eV [Richardson, 2002] and 460 km s−1 have been assumed for the present study; P_MIMI is the pressure contribution of suprathermal water group ions (see Sergis et al. [2009] for details) and P_ELS is the thermal electron pressure contribution. The constant factor 1.16 accounts for a 4% density abundance of He+ in the solar wind, with a temperature approximately 4 times greater than that of the protons [Robbins et al., 1970].
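A minimal sketch of this pressure-balance estimate is given below, using the form of equation (5) as reconstructed above (the exact grouping of terms is therefore an assumption) together with the constants quoted in the text (k ∼ 0.88, T_SW = 100 eV, u_SW = 460 km s−1).

```python
import numpy as np

K_FLOW = 0.88          # streamline-divergence factor (Walker & Russell [1995])
MU0 = 4e-7 * np.pi     # permeability of free space [H/m]
KB = 1.380649e-23      # Boltzmann constant [J/K]
MP = 1.6726e-27        # proton mass [kg]
T_SW = 100 * 11604.5   # 100 eV solar wind temperature, converted to kelvin
U_SW = 460e3           # solar wind speed [m/s]

def dynamic_pressure(B_nT, psi_rad, p_mimi=0.0, p_els=0.0):
    """Estimate the solar wind dynamic pressure [Pa] from pressure balance,
    in the spirit of equation (5) as reconstructed in the text:
      D_P*(k*cos^2(psi) + kb*T_SW/(1.16*mp*u_SW^2)) = B^2/(2*mu0) + P_MIMI + P_ELS
    B_nT: interior field magnitude [nT]; p_mimi, p_els in Pa."""
    p_int = (B_nT * 1e-9) ** 2 / (2.0 * MU0) + p_mimi + p_els
    static = KB * T_SW / (1.16 * MP * U_SW ** 2)
    return p_int / (K_FLOW * np.cos(psi_rad) ** 2 + static)
```

With these numbers the static term evaluates to ∼0.04, small compared with k = 0.88, which is consistent with the insensitivity to T_SW and u_SW noted next.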
Kanani et al. [2010] found that the dynamic pressure is insensitive to the values assumed for T_SW and u_SW, since the second term of equation (5) is much smaller than the first term for any reasonable values of the solar wind parameters and for almost the full range of Ψ. The empirical model described above is capable of representing both open and closed magnetospheres but is axisymmetric about the planet-Sun line. Lin et al. [2010] modified this functional form to allow both north-south and east-west asymmetries and fitted it to magnetopause crossings from many different spacecraft in orbit around the Earth, using a Levenberg-Marquardt solver in several stages. Even with these modifications, they found that, globally, the Earth's magnetopause is largely axisymmetric at equinox, but the local structure changes substantially in the cusp regions. The physical properties of Saturn and Jupiter and their magnetospheres relative to the Earth (e.g., high rotation speeds, internal plasma sources, and magnetospheric size scales) imply that the internal dynamics taking place at these systems are significantly different to those within the Earth's magnetosphere. It follows that the geometry of Saturn's magnetopause is likely to be significantly different to the terrestrial magnetopause. Pilkington et al. [2014] explored the high-latitude structure of the Kronian magnetopause using the first set of highly inclined orbits of Cassini from 2007 to 2009. They also used equation (5) to estimate the dynamic pressure but omitted the thermal electron pressure moments, as Kanani et al. [2010] found that they were, on average, 2 orders of magnitude smaller than the suprathermal ion pressure moments and, hence, negligible in this context. Pilkington et al. [2014] also considered the pressure contribution associated with the centrifugal force at the magnetopause using the magnetodisc model of Achilleos et al. [2010a] but found that this was also negligible compared to the other contributions represented in equation (5). After identifying a departure between the observed locations and the locations predicted by the axisymmetric magnetopause model, Pilkington et al. [2014] modified the model in order to incorporate polar confinement by applying a simple dilation along the Z_KSM axis by a factor ε. They used high-latitude magnetopause crossings to determine which value of ε provided the most statistically significant fit to these data and consequently found that flattening the surface by ∼19% along the Z_KSM axis provided the best fit. Their data, however, were restricted to the northern hemisphere on the duskside of the planet. In addition, Kivelson and Jia [2014] studied the Kronian magnetosphere using MHD simulations and identified a dawn-dusk asymmetry in the average extent of the magnetopause. This is not incorporated into the current work but will be the subject of a future study.

This Study In this study, the empirical surface described by equations (1)-(3) is modified by incorporating polar flattening, simply reducing the extent of the magnetopause along the north-south direction by a scaling factor ε as done by Pilkington et al. [2014]. This is included as a free parameter when fitting the surface to the set of data described in section 3. The ultimate aim is to determine the set of coefficients a_i and ε that minimize the distance between the observed magnetopause and the location predicted by the model for each magnetopause crossing.
After calculating the crossing-surface distance for each crossing using the method described in Appendix B, the root-mean-squared (RMS) residual is calculated and is minimized until it reaches a tolerance of 10−6 R_S. The first stage of this procedure involves estimating the dynamic pressure at the time of each magnetopause crossing, at which the model surface will be constructed. The same method is employed to estimate the dynamic pressure as in the studies described above, but we also estimate the pressure contribution from the water group ion population with energies <45 keV (which we define as "low energy"). Kanani et al. [2010] accounted for the pressure associated with low-energy protons by assuming that their number density is 20% of the low-energy electron density and, hence, that they have a pressure contribution equivalent to 20% of the electron pressure, assuming equal temperatures. However, the pressure associated with the water group ions within this energy range was not included by Kanani et al. [2010]. Thomsen et al. [2010] surveyed the properties of the low-energy ion population using the CAPS ion mass spectrometer. They found that beyond L ∼ 11 R_S the pressure associated with the thermal water group ion population at the rotational equator is comparable to the suprathermal contribution, in agreement with the results of Sergis et al. [2010]. To obtain an upper limit estimate of the additional contribution made to the magnetospheric pressure by the thermal ions, we make use of the same data as used by Thomsen et al. [2010], with the equatorial pressures binned by L. But instead of the bin averages [cf. Thomsen et al., 2010, Figure 12], the maximum pressures found in each bin are fitted to. The resulting upper limit profile, equation (6) (not reproduced here), gives P_e, the equatorial pressure in nanopascals at the center of the plasma sheet, as a function of L, the distance between the planet center and the equatorial crossing of the dipole field line that passes through the point of interest. The energy ranges of the CAPS and MIMI instruments overlap between 3 and 45 keV, and pressure moments derived from the latter are also used in this study, meaning that the pressure contribution for ions in the overlap region may be counted twice. However, Sergis et al. [2010] found that the overestimation of the total pressure due to this overlap is generally less than 25%, as the sensitivity of the CAPS instrument drops as it approaches its upper limit detection threshold. This is small compared to the scatter in the data. To account for the strong centrifugal confinement of the thermal plasma near the current sheet, the equatorial pressure (equation (6)) is scaled with height above the spin equator, z, in the same way as Hill and Michel [1976],

P(z) = P_e exp[−(z/H)^2], (7)

where H is the ion scale height at Saturn's magnetopause, which was found to be ∼5 R_S for W+ at L ∼ 17 [Thomsen et al., 2010].
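A one-line implementation of this confinement scaling follows; the Gaussian form is my reading of the Hill and Michel [1976] scale-height picture as quoted above.

```python
import numpy as np

def plasma_sheet_pressure(p_eq, z, H=5.0):
    """Scale the equatorial thermal pressure with height above the current
    sheet center, assuming the Gaussian confinement P(z) = P_eq*exp(-(z/H)**2)
    of equation (7); H ~ 5 R_S for W+ at L ~ 17 [Thomsen et al., 2010].
    z and H in R_S, p_eq in nPa."""
    return p_eq * np.exp(-(z / H) ** 2)
```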
Arridge et al. [2008, 2011] found that the plasma sheet is deflected out of the spin equator as a function of planetary season due to solar wind forcing, and that it oscillates about this mean position in phase with the magnetic oscillation [e.g., Andrews et al., 2008]. To determine the effective value of z in equation (7) for each of the magnetopause crossings in our study, we reference the spacecraft position with respect to the expected location of the current sheet, given by equation (8) of the Arridge et al. [2011] model (not reproduced here), where z_CS is the displacement of the current sheet away from the spin equator, ρ is the cylindrical distance from Saturn measured in the equatorial plane, r_H is the characteristic distance where current sheet "hinging" begins, and θ_SUN is the subsolar latitude. ρ_0 is the distance at which the plasma sheet becomes tilted, θ_TILT is the tilt angle of the plasma sheet and, finally, Ψ_PS is the phase of the plasma sheet oscillation. The hinging distance has been taken to equal the standoff distance of the magnetopause surface that passes directly through each crossing location, as suggested by Arridge et al. [2008]. Values for θ_TILT (7.0°) and ρ_0 (10 R_S) were chosen in order to maximize the displacement of the oscillating current sheet while remaining consistent with the results of Arridge et al. [2011]. The current sheet was chosen to be centered on any magnetopause crossing where its combined hinging and oscillation could cause it to move to such a position, thus maximizing P_cold. Hence, equation (5) then becomes

D_P [k cos^2 Ψ + k_b T_SW / (1.16 m_p u_SW^2)] = B^2 / (2 μ_0) + P_MIMI + P_cold. (9)

The upper limit P_cold that is used here is comparable to, but in general smaller than, P_MIMI, while P_cold/P_MIMI ≪ 1 for the high-latitude crossings, as anticipated. Including the P_cold term provides a small improvement to the fitting RMS residual, but the parameters derived from fitting the empirical model to the data set described in section 3 are insensitive to its inclusion (within the fitting uncertainties at the 2σ level).

In Situ Magnetopause Observations The Kanani et al. [2010] study covered magnetopause crossings of the Cassini spacecraft from before Saturn Orbit Insertion (SOI, July 2004) up until January 2006, during which time the spacecraft sampled the low-latitude magnetopause up to ∼40 R_S beyond the terminator on the dawn side of the planet. Pilkington et al. [2014] covered from early 2007 to the end of 2008, during which the spacecraft sampled the high-latitude magnetopause in the northern hemisphere on the duskside of the planet but had far poorer coverage of the equatorial magnetopause. We have reidentified crossings during the interval covered by Kanani et al. [2010] and, in general, find very good agreement with the original analysis. The present study utilizes the data covered by Kanani et al. [2010] and Pilkington et al. [2014] and extends them such that crossings from 28 June 2004 (just prior to SOI) to 29 October 2010 and from 13 May 2012 to 8 February 2013 are covered. The latter period was added because the high-latitude magnetopause in the southern hemisphere was sampled during this time, and coverage was extended from the conclusion of the Pilkington et al. [2014] study to late 2010 in order to attain better coverage of the equatorial magnetopause on the dawn side of the planet. These trajectories are shown in Figure 2. Pilkington et al. [2014] analyzed the trajectory of the spacecraft to ensure they had adequate sampling, such that their results were not biased by observations of extreme magnetopause configurations. That exercise is not repeated herein, but their results are used in order to reduce the data to avoid bias where necessary. Specifically, Pilkington et al. [2014] found that they had good sampling of the high-latitude magnetopause for X_KSM ≥ 2.5 R_S. It should also be noted that, since this study spans a sizeable fraction of a Kronian year, seasonal variability in the magnetopause geometry is now an issue of which to be wary.
Specifically, Maurice et al. [1996] and Hansen et al. [2005] found a significant north-south asymmetry in the magnetopause geometry under conditions where the magnetic dipole is not orthogonal to the direction of solar wind flow. Such a situation occurs over the majority of the Kronian year, and the two are only truly orthogonal at equinox. This is thus expected to affect the location of the high-latitude magnetopause crossings. However, in the current study, all high-latitude observations were made at similar hemispheric season, since the crossings in the northern and southern hemispheres were separated by roughly 6 years. The magnetic dipole was tilted away from the Sun by ∼10-14° in the northern hemisphere in 2007, when the high-latitude observations were made in that region. Similarly, in the southern hemisphere the dipole was tilted away from the Sun by ∼14-17° in 2012-2013, when the high-latitude observations took place there. As such, one may expect the degree of polar flattening to be similar in both hemispheres. Indeed, if the empirical model outlined in section 2.2 is fitted to crossings in each hemisphere separately, as outlined in Appendix A, the same degree of polar flattening is retrieved within the fitting uncertainties. It is thus assumed for this particular data set that it is appropriate to fit a single empirical model describing polar flattening using a single free parameter. However, this effect will be further quantified in a future study. Data from the Cassini Fluxgate Magnetometer (MAG) [Dougherty et al., 2002] and CAPS-ELS were used to identify magnetopause crossings. Some of these observations are shown in Figure 3.

Figure 3. When the spacecraft passes from the magnetosheath to the magnetosphere, crossings are typically characterized by a sharp increase in the field strength and usually a rotation in the field components; the field is usually much more variable in the magnetosheath. The MAG data shown have a resolution of 1 min but are smoothed using a moving average filter with a span of 11 min. A sharp decrease in the electron count rate (proportional to density) is also observed, in addition to a sharp increase in the average electron energy.

The internal pressure was estimated for each crossing by summing the field and plasma pressures just inside the magnetosphere, averaged over a time interval no smaller than 20 min in duration, as they can be highly variable. In total, 1607 magnetopause crossings were identified. Of these, MIMI pressure moments were unavailable for 93, leaving 1514. In previous studies, crossings closely separated spatially and in time were averaged together to prevent artificial weighting due to boundary motions [e.g., Slavin et al., 1983; Arridge et al., 2006; Kanani et al., 2010]. We point out here that, due to the underlying assumption of pressure balance across the magnetopause, this practice could, in fact, be detrimental to the study and could reduce the accuracy of the model. This is because the magnetopause moves much faster than the spacecraft (to zeroth order, it can be assumed that the spacecraft is stationary with respect to the magnetopause). As a result, if the magnetopause is observed on multiple occasions within a short period of time, it is likely to be close to equilibrium because, otherwise, it would be observed just once as it moves rapidly past the spacecraft. So, in that sense, not performing this averaging could improve the study as, essentially, measurements where pressure equilibrium is a good assumption would be (slightly) more highly weighted.
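The Figure 3 caption suggests a simple first-pass detector for candidate crossings; the sketch below smooths 1-min |B| data with the 11-min moving average mentioned there and flags large jumps. The jump threshold is a hypothetical tuning value, and a real identification would also use the CAPS-ELS signatures.

```python
import numpy as np
import pandas as pd

def flag_crossing_candidates(b_mag, window=11, jump=2.0):
    """Flag candidate magnetopause crossings in 1-min field magnitude data:
    smooth |B| with an 11-point (11-min) centered moving average, as in the
    Figure 3 caption, then flag samples where the smoothed field changes by
    more than `jump` nT between adjacent points (hypothetical threshold)."""
    b = pd.Series(b_mag).rolling(window, center=True).mean()
    return np.flatnonzero(b.diff().abs() > jump)
```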
Furthermore, Jia et al. [2012] found using MHD simulations that, even under steady solar wind conditions, the magnetopause experiences periodic movements. Temporal variability of the magnetopause under such conditions will be preserved by using the full set of data without averaging. For completeness, the effect of averaging on the results of this study was investigated by averaging crossings on the dawn and dusk sides within 5 and 3 h of each other, respectively, in accordance with the study of Saturn's boundary motions by Masters et al. [2012]. In practice, if two crossings were observed within this period, the one with the poorer statistics was discarded. In some cases, the estimated dynamic pressure between the two observations was significantly different, which added additional scatter to the data when the quantities were averaged. Since such crossings are close together both temporally and spatially, averaging their positions makes very little difference to their locations. After averaging, 737 crossings remained with which to fit the model. It was found that averaging had no significant effect on the fitting results presented in later sections, so the results fitted to the entire data set without averaging (i.e., 1514 crossings) are presented. The only difference between the two methods was the magnitude of the uncertainties in the fitted model parameters, which, of course, are smaller when the full data set is used. The full data set is displayed in Figure 4 in the Kronocentric Solar Magnetospheric (KSM) system, where the X_KSM axis is along the planet-Sun line directed toward the Sun, the Z_KSM axis is oriented such that the planetary magnetic dipole lies within the X_KSM-Z_KSM plane, and the Y_KSM axis completes the right-handed set and is, hence, directed from dawn to dusk. The spacecraft positions were calculated using the reconstructed trajectory kernels of NASA's Navigation and Ancillary Information Facility "SPICE" geometry information system.

Initial Results The initial results of fitting the model to all crossings simultaneously are shown in Figure 5 as the black confidence ellipses, along with the results of previous studies, all at the 2σ level. The technical details regarding the fitting methodology and a significant improvement made over previous studies are detailed in Appendices A and B, respectively. The uncertainties have been estimated in two different ways, as discussed in Appendix C, though both methods give similar results in this case.

Figure 5. The coefficients obtained by fitting magnetopause surfaces to magnetopause crossing data; all results are displayed at the 2σ level. The colored bars are the results of previous studies, while the confidence ellipses indicate the results of this study using the usual standoff distance power law (black) and the new β-dependent power law (magenta). See Table 1 for precise values of the coefficients. Note that the uncertainties are much smaller in this study due to the improvements made in the fitting procedure and the large amount of data used.

Most coefficients are in agreement with previous studies within the fitting uncertainties, but for the coefficients a_1 and a_2 there is a significant disagreement with previous studies. Coefficient a_1 defines the scale size of the system and a_2 defines the compressibility of the magnetosphere, that is, how strongly it reacts to variations in dynamic pressure. We found that a_2 = 1/(7.8 ± 0.4), which apparently indicates that the magnetosphere is very "stiff" and relatively unresponsive to changes in dynamic pressure. A value of one sixth is expected for a dipole magnetic field and is usually considered appropriate in the case of the Earth [e.g., Shue et al., 1997]. A value larger than this is expected for plasma-laden systems such as those of Saturn and Jupiter. For example, Kanani et al. [2010] find a_2 = 1/(5.0 ± 0.8) for Saturn, and Huddleston et al. [1998] find a_2 = 1/(4.5 ± 0.8) for Jupiter. In this context, our value does not seem physically feasible, at least when predicting the nominal response of the magnetosphere to changes in dynamic pressure.
A slightly different approach to obtaining both a_1 and a_2 is to take the logarithm of equation (2) and rearrange it to form a linear relationship,

log r_0 = log a_1 − a_2 log D_P, (10)

where D_P is estimated assuming pressure balance as usual, and r_0 can be found for each crossing by fitting the surface directly through that crossing. Coefficients a_1 and a_2 can then be obtained from the resulting line of best fit when these quantities are plotted. The subtle difference between this method and the global fitting method is that, in this case, the surface passes directly through each magnetopause crossing. In the previous method, the surface was constructed at D_P and did not necessarily pass directly through each magnetopause crossing; in fact, it is the distance between the surface constructed at D_P and the crossing location that is used to assess how well the model fits the data, as described in Appendix B. The results of these two methods of estimating a_1 and a_2 are shown in Figure 6a. Reassuringly, both methods give the same results within the uncertainties. Interestingly though, there appears to be substantial scatter above the main body of the data, whereas there is relatively little below. Figure 6b shows the same data colored by log β, where β is the ratio of the total plasma pressure to the magnetic pressure. This parameter shows a remarkable trend with system size. It shows that the location of the magnetopause is affected dramatically by the plasma conditions adjacent to it, such that the extrapolated standoff location can vary between 10 and 15 R_S between low and high β conditions at constant D_P. As such, the standoff distance power law, equation (2), used to model the size of the magnetosphere is not valid, as a single one-dimensional power law cannot account for this variability due to changing β. This variability is much larger than the magnetopause oscillations observed by Clarke et al. [2010], typically of amplitude ∼1.2 R_S but occasionally as large as ∼4-5 R_S. However, a similar degree of variability in standoff location has been identified under low (<0.005 nPa) solar wind dynamic pressure conditions by Jia et al. [2012] using MHD simulations. Furthermore, Figure 7 shows that β and r_0 are strongly correlated, and this correlation increases with D_P. There is only a very weak correlation between these quantities within the smallest dynamic pressure group, as there are very few high-β crossings within this group (Figure 6b). A possible explanation for this is that the Cassini orbit usually lies inside the magnetopause when it is greatly expanded, so crossings under conditions of high interior and low dynamic pressure frequently cannot be measured; hence, the correlation between β and r_0 is low in these situations. Essentially, a detection threshold is reached whereby the magnetopause can only be sampled when it has a standoff distance smaller than 40 R_S, as such a magnetosphere could easily extend to 90 R_S in the terminator plane.
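The linearised fit of equation (10) introduced above is a one-liner; the sketch below recovers a_1 and a_2 from arrays of per-crossing r_0 and D_P values.

```python
import numpy as np

def fit_standoff_power_law(r0, d_p):
    """Least-squares fit of log10(r0) = log10(a1) - a2*log10(D_P),
    the linearised standoff power law of equation (10).
    r0 in R_S, d_p in nPa; both 1-D arrays of per-crossing values."""
    slope, intercept = np.polyfit(np.log10(d_p), np.log10(r0), 1)
    a1, a2 = 10.0 ** intercept, -slope
    return a1, a2
```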
A similar trend is evident between r_0 and the total plasma pressure, though it is weaker than the aforementioned trend between r_0 and β. The most likely reason for this is that, if the magnetic field is strong enough, it can suppress the expansion of the system, since the plasma pressure must be strong enough to change the magnetic field configuration given that the two are frozen together. The β parameter, on the other hand, describes what is controlling the system: is the magnetic field sufficient to confine the plasma, or is the plasma pressure strong enough that it can reshape the system and significantly perturb the magnetic field? In the first instance, one could repeat the analysis over small intervals of β to identify how the system scales under different internal conditions. There are many different methods one could use to split the data; here, a k-means clustering algorithm is used to separate the data as naturally as possible, but in reality β is continuous and any small interval of β could be chosen provided that it contains enough crossings. This algorithm has been used to separate the data into three intervals of β, and separate best fit lines were fitted through each cluster, as shown in Figure 8. In each case, the magnetospheric compressibility remained the same within the estimated uncertainties and was 1/(5.5 ± 0.2) on average, in agreement with Kanani et al. [2010]. However, a_1, which scales the size of the magnetosphere, changed between clusters well outside of the uncertainties, and in the same sense as the average value of β for each cluster. This indicates that the magnetosphere can exist in a relatively plasma-depleted or plasma-loaded state, as indicated schematically in Figure 9.

Figure 6. (a) Two different methods are used to find the coefficients a_1 and a_2, as described in the text. Within the uncertainties, both methods give the same results and find that a_2 is much smaller than expected for Saturn. (b) The same data with a log β color scale, showing that the plasma conditions inside the magnetosphere strongly affect the location of the magnetopause.

Conceptually, this makes sense. Consider the simplified situation where the magnetosphere is initially in steady state such that the internal and external pressures are equal. If the interior plasma pressure then increases, the instantaneous β will also increase and the magnetosphere will expand in order to reestablish equilibrium. Hence, even for a steady dynamic pressure there is a range of plausible standoff distances, depending on the internal conditions as a result of gradual mass loading. The large fluctuations in the observed interior plasma conditions may be caused by quantized plasmoid loss as a result of Vasyliũnas-style reconnection in the magnetotail and the resulting planetward flow of energized plasma [Vasyliunas, 1983], or through interchange events as observed via energetic neutral atom imaging by the Ion and Neutral Camera on board Cassini [e.g., Krimigis et al., 2007; Brandt et al., 2010; Mitchell et al., 2015]. Both of these types of events lead to rapid changes in the interior plasma pressure and are expected to affect pressure balance at the magnetopause boundary as a result. The usual standoff distance power law cannot account for these effects, as the scaling factor, a_1, must change to compensate for the resulting change in the geometry of the magnetopause even while the solar wind pressure remains steady.
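Returning to the clustering step described earlier in this section, it can be sketched as follows; scikit-learn's k-means is used as a stand-in for whatever implementation was actually employed.

```python
import numpy as np
from sklearn.cluster import KMeans

def split_by_beta(log_beta, n_clusters=3, seed=0):
    """Separate crossings into clusters of log(beta), mirroring the
    k-means split described in the text; returns a label per crossing."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed)
    return km.fit_predict(np.asarray(log_beta).reshape(-1, 1))
```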
What is unclear, however, is how β changes as the system expands. Ultimately it depends on the rate of change of the magnetic field strength and the plasma pressure with respect to system size. A theoretical treatment of this process could be the subject of future work.

Incorporating β Into the Empirical Magnetopause Model The original standoff distance power law was derived in the context of the Earth's magnetosphere, which is relatively devoid of plasma at the magnetopause boundary. As such, it is a good approximation to assume that the solar wind dynamic pressure is balanced by the magnetic pressure alone. Of course, this is far from true of the magnetospheres of Saturn and Jupiter and will be addressed here. The dynamic pressure at the standoff point can be approximated as

k D_P ≈ (1 + β) B^2 / (2 μ_0). (11)

At this location, the magnetic field can be expressed as B = B_0 r_0^(−1/(2 a_2)), where B_0 is the equatorial magnetic field at the surface of the planet. This power law is valid over a wide range of standoff distance, as found by Bunce et al. [2007] and Achilleos et al. [2014], but it is affected by the magnetospheric plasma content, which causes a_2 to change.

Figure 7. The crossings have been separated into bins of log D_P, and the correlation between β and r_0 has been calculated. In all cases, this correlation is positive and seems to increase with D_P. Besides the smallest D_P bin, the p value (the probability of such a correlation occurring by chance) is negligible. These correlation coefficients should be taken as lower limits, as the D_P bins are fairly coarse to ensure a representative number of crossings fall within each one.

Hence, r_0 can be expressed as

r_0 = a_1 [(1 + β) / D_P]^(a_2). (12)

Note that, strictly speaking, β in equation (12) should be the plasma β measured just inside the standoff point. In the absence of this information, the locally measured β will be used in the first instance. The results of repeating the fitting procedure with the new standoff distance power law are shown in Figure 5 by the magenta ellipses at the 2σ level. Now, all results are in agreement with previous studies, and incorporating β into the empirical model results in a decrease in the RMS residual of 0.8 R_S, indicating a large increase in the accuracy of the model without the use of additional free parameters. In the first instance, this may appear puzzling: earlier analyses did not include the β dependence, so should they not agree better with the analysis performed without β? The explanation for this apparent paradox is that the data used in these earlier studies were confined to a region of the magnetosphere where β is, in general, relatively small. The β dependence is still present within these data but has a much smaller influence. For these data, the median β ∼ 1.6, in comparison with β ∼ 3.0 across the entire data set. The fact that using the local β value leads to such a large increase in the predictive power of the model indicates that there may be a strong correlation between the local β and the β at the nose. Indeed, adding an additional free parameter to scale the local β to that expected at the standoff point improves the accuracy of the model by 0.3 R_S, with a scale factor of ∼0.4.
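Because equations (11) and (12) had to be reconstructed here from the surrounding prose, the short derivation below spells out how they connect; the identification of a_1 in terms of B_0, μ_0, and k is my inference, not a statement from the paper.

```latex
% Pressure balance at the nose (Psi = 0), with the plasma pressure
% folded in through beta, and the field power law B = B0 r0^{-1/(2 a2)}:
\begin{align}
  k\,D_P &\simeq (1+\beta)\,\frac{B^2}{2\mu_0}
          = (1+\beta)\,\frac{B_0^2}{2\mu_0}\, r_0^{-1/a_2}, \\
  \Rightarrow\quad r_0 &= a_1\left(\frac{1+\beta}{D_P}\right)^{a_2},
  \qquad a_1 \equiv \left(\frac{B_0^2}{2\mu_0 k}\right)^{a_2}.
\end{align}
```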
Figure 8. A k-means clustering algorithm has been used to separate crossings into three clusters based on β, that is, into groups based upon the plasma conditions prevalent at the time of the crossing. Lines of best fit have been fitted through these groups separately. Of particular note is that the factor that governs the size scale of the magnetosphere (a_1) increases with β well outside of the uncertainties. This result is insensitive to the number of clusters into which the data are separated (the analysis has been attempted with up to seven).

Figure 9. A schematic depicting two snapshots of the system under conditions in which the interior plasma pressure adjacent to the magnetopause is (a) low and (b) high, and the corresponding effects these conditions have on the magnetopause location. When β is high, the plasma pressure dominates over the magnetic pressure and can change the magnetic field structure and push out the boundary. The white point indicates Enceladus, a large plasma source within the system. The magnetic field lines are distended radially outward when the hot plasma pressure is increased in the corresponding force balance within the magnetosphere [Achilleos et al., 2010b].

However, after performing an F test on these models, it was found that the additional free parameter does not provide a statistically significant improvement to the predictive power of the model, so it will not be discussed further. The magnetospheric compressibility now agrees with previous studies, though it is more "Earth-like" (more dipolar) than in previous studies. In addition, it was found that the dynamic pressure has only a very small effect on the magnetospheric flaring, so it can be safely neglected in future studies with minimal loss of model accuracy. Achilleos et al. [2008] used magnetopause crossings observed between 1 July 2004 and 3 September 2005 to assess the long-term statistical behavior of Saturn's magnetosphere. They reported that the magnetospheric standoff distance, which is a proxy for the global size of the magnetosphere, exhibits a bimodal structure, meaning that there are two most likely standoff distances associated with the internal magnetospheric configuration. It is plausible that these "modes" correspond to measurements in which the magnetopause is caught in either a plasma-loaded or a plasma-depleted state, with a relatively rapid transition between these states.

Revisiting Bimodality This study is an ideal opportunity to revisit bimodality in light of the much larger data set that has been amassed. The standoff distance has been calculated for each magnetopause crossing by fitting the best fitting model described in Table 1 directly through each magnetopause crossing. This tends to be a more stable way of calculating the standoff distance than using equations (2) or (12), since information about all of the coefficients is used, and correlations between the coefficients mean that the standoff distances do not change much within the coefficient uncertainties. Figure 10 shows a histogram of the empirical standoff distances with normal, lognormal (the best fitting example of a skewed distribution in this case), and bimodal distributions fitted to the data. Statistical tests have been used to determine which of these provides the best fit to the data. First of all, the Kolmogorov-Smirnov test [Massey, 1951] has been applied to test the null hypothesis that the data could have arisen from an underlying population that follows each distribution. Using this test, the normal distribution was overwhelmingly rejected with a negligible p value, which can be interpreted as the probability of obtaining a distribution at least as extreme as that observed provided that the null hypothesis is true. Since this probability is negligible, this test implies that the underlying standoff distance population is very unlikely to be normally distributed.
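A sketch of this kind of distribution comparison is given below: the Kolmogorov-Smirnov test just applied, plus the information-criterion check discussed next. A Gaussian mixture is used as a stand-in for the bimodal and trimodal fits, since the paper does not specify its fitting machinery.

```python
import numpy as np
from scipy import stats
from sklearn.mixture import GaussianMixture

def compare_standoff_distributions(r0):
    """Compare unimodal and multimodal descriptions of the standoff
    distances, loosely following the tests described in the text."""
    r0 = np.asarray(r0).reshape(-1, 1)
    # KS test against a normal distribution fitted to the data
    mu, sigma = r0.mean(), r0.std(ddof=1)
    ks_p = stats.kstest(r0.ravel(), "norm", args=(mu, sigma)).pvalue
    # BIC for Gaussian mixtures with 1, 2, and 3 components
    bics = {n: GaussianMixture(n, n_init=5, random_state=0).fit(r0).bic(r0)
            for n in (1, 2, 3)}
    return ks_p, bics   # the lowest BIC indicates the preferred model
```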
The p value was much larger for the lognormal distribution but was still negligible (approximately one in a million). On the other hand, the p value for the bimodal distribution is ∼0.17. While this probability is still fairly low, it shows that the bimodal distribution is far more likely to describe the underlying population from which our data are drawn. Even so, the low probability indicates that the bimodal distribution is not able to capture the behavior of the magnetopause entirely. It is possible that the degree of skewness evident in the distribution could be the reason why the p value is still quite small for the bimodal distribution. Higher-order distributions can also be tested, such as a "trimodal" distribution, which yields a p value of 0.54. However, care must be taken not to overfit: the p value will asymptotically approach 1 as more free parameters are added to the model. To ascertain whether this is the case, the Bayesian information criterion (BIC) [Schwarz, 1978] can be calculated. This is a measure of the information retained by the model while penalizing additional free parameters. The model that minimizes the BIC is the model that retains the most information about the distribution without introducing extraneous free parameters. In this case, the model that achieves this is a bimodal distribution with means at 20.7 and 27.1 R_S and mixing proportions of 43% and 57%, respectively. Joy et al. [2002] and Achilleos et al. [2008] found that such bimodal behavior could not be explained by the solar wind alone. This is supported by Jackman et al. [2011], who analyzed the solar wind conditions upstream of Saturn and Jupiter and found that the dynamic pressure distribution was best described by a single peak. This implies that the second peak in the distribution may be caused by internally driven plasma dynamics. Specifically, it may be symptomatic of the cycle of mass loading and unloading described by Vasyliunas [1983]. If the transition between the loaded and unloaded states is rapid compared to the time that the system actually spends in each state, it stands to reason that the magnetosphere would be observed less often in this intermediate state.

Figure 10. Histogram of the empirical standoff distances, obtained by fitting the best fitting model described in Table 1 through the precise location of each magnetopause crossing. On top of this are plotted normal (red, dashed line), bimodal (green, solid line), and lognormal (magenta, dash-dotted) distributions fitted to these data.

Summary: A Global Magnetopause Model Here the largest and most complete set of Kronian magnetopause crossings to date has been assembled, covering ∼7 years of the Cassini mission and sampling far more of the global surface geometry than ever before. Assuming balance between pressure sources internal and external to the magnetosphere, the solar wind dynamic pressure has been estimated and a pressure-dependent surface was fitted to the location of these crossings. Several key modifications were made to the fitting procedure relative to previous studies. First, a more sophisticated solver was used to explore parameter space efficiently and ensure that the set of parameters corresponding to the global minimum is found. Second, the distance between each magnetopause crossing and the empirical surface was calculated exactly. It was found that this made a significant difference to the degree of magnetopause flaring, and smaller differences in the other parameters, compared to the approximate method used in previous studies, which led to an increase in the accuracy of the model by ∼0.6 R_S.
Finally, the thermal ion pressure contribution was calculated more rigorously by exploiting the results of previous work, which resulted in an additional increase in the model accuracy. The dynamic pressure alone was not enough to account for the variability in the size of the magnetosphere and, furthermore, the extra variability could be attributed to dynamic plasma processes inside the magnetosphere, which can cause the magnetosphere to expand by 10-15 R_S at constant D_P. This is much larger than the periodic oscillation of the magnetopause location with amplitude ∼1.2 R_S found by Clarke et al. [2010], but it is consistent with MHD simulations, which exhibit a similar degree of variability under low solar wind dynamic pressure conditions [e.g., Jia et al., 2012]. This internal variability could be characterized in terms of the plasma β just inside the magnetopause and, subsequently, this effect was incorporated into the global fitting routine by adding a β dependency to the power law used in previous studies that relates the size of the magnetosphere to the dynamic pressure. This results in a substantial increase in the accuracy of the model's predictions, reducing the RMS residual by ∼0.8 R_S (the model coefficients are displayed in Table 1). The internal variability described here may be associated with the build up and subsequent loss of plasma from the system and may explain why the sizes of Jupiter's and Saturn's magnetospheres exhibit bimodality, as found by Joy et al. [2002] and Achilleos et al. [2008], respectively, and supported by these observations. In a future paper, we will attempt to resolve significant asymmetries in the structure of the magnetopause. Further studies should also look at more complex magnetopause structures, such as cusp indentation regions. Maurice et al. [1996] found that Saturn's magnetopause has significant cusp indentation regions that could be implemented into future empirical models, as was done by Lin et al. [2010] to describe the terrestrial magnetopause. The main barrier to this is a lack of cusp crossings to constrain such a model. During the course of this study, ∼10 magnetopause crossings were identified within the cusp region, identified as such due to their high-latitude location and very low β. Finally, Clarke et al. [2006] observed smaller-scale oscillations in the location of the boundary caused by oscillations in the magnetic field and plasma signatures that are known to occur throughout the Kronian system. Similarly, Zieger et al. [2010] found that the periodic release of plasmoids down the magnetotail causes the magnetopause to oscillate as the resulting waves propagate through the system. For the present study these are neglected but could be added to the existing model as an extra layer of complexity on top of the internally driven variability already discussed.

Appendix A: Fitting Methodology A local solver is initiated at each trial point in the sequence, and a list of these starting points, the solutions to which the solver ultimately converged, and the distances between these start and end points is compiled. Several different local solvers of varying complexity were experimented with by generating a synthetic set of magnetopause crossings from a known model, preserving the distribution of the quantities measured in situ and, finally, adding Gaussian noise to them. Each solver was then tried on this synthetic data set to evaluate which solver was able to most closely replicate the known model. Ultimately, the implementation of the interior point algorithm of Waltz et al.
[2005] in the MATLAB Optimization Toolbox provided the most accurate results. Between each call, the maximum distance between the trial solutions that successfully converged and the solutions they ultimately converged to is used to estimate the radius of the basin of attraction for subsequent trial solutions, by multiplying it by an empirically chosen scale factor. This scale factor balances the compromise between efficiency and accuracy and was found by running the global solver on many different problems of varying complexity. At each iteration, the model described by parameters a_i was fitted precisely through the location of each observed magnetopause crossing, to within a tolerance of 10−6 R_S, in order to calculate Ψ and hence D_P. Initially, a simple Newton-Raphson solver was used to do this efficiently, but it was found that successful convergence for many crossings was sensitive to the choice of a_i. Now, more sophisticated solvers from the MATLAB Optimization Toolbox are used. A Levenberg-Marquardt solver is used in the first instance, as convergence can be achieved for the vast majority of crossings, for any given set of coefficients, relatively efficiently using this solver. For crossings where this is not possible, the trust region reflective solver is used, which is more computationally expensive but has yet to fail to converge for this purpose. Complete convergence is necessary because the top level solver requires a smooth objective function, and gradient thereof, to operate correctly, an assumption which is violated if convergence is not achieved for every single crossing fed into the solver.

Appendix B: A Better "Fitness" Criterion In order for the solver to know in which direction to move in parameter space, a goodness-of-fit estimator must be calculated at each iteration. A good choice is the distance between each crossing location and the surface described by a_i and D_P. Previous studies used the distance between the crossing and the point where the model surface intersects the crossing radial vector (the blue and red points in Figure B1, respectively) as an approximation for the distance between the crossing and the model surface. Here, a near-exact solution for the minimum crossing-surface distance is found by solving a system of nonlinear equations numerically, using the fact that the shortest distance between a point and a surface is along the normal to the surface that passes through that point:

G(x_0, y_0, z_0) = 0, (B1)
x − x_0 = F n_x, (B2)

where G = 0 is the implicit equation of the model surface, x and x_0 are the X_KSM coordinates of the crossing location and of the point on the surface closest to the crossing, respectively, n_x is the X_KSM component of the outward directed normal to the surface computed at the closest point to the crossing, and F scales the normal vector to the crossing location and is equal to the distance between the crossing and the model surface if the unit normal is used. Hence, F is zero if the model surface passes directly through the crossing location, and positive (negative) if the crossing lies outside (inside) the surface. Analogous equations to equation (B2) can be constructed for the other two spatial coordinates, Y_KSM and Z_KSM. Equation (B1) constrains the solution to lie on the surface, and equation (B2) and its counterparts ensure that the closest point to the crossing is found. An initial guess is required for (x_0, y_0, z_0, F), but the ultimate solution does not depend on this guess, as there is only one solution that can satisfy these equations for any given magnetopause crossing and D_P. The radial approximation used in previous studies is a good first guess that can be used to minimize computation time. These equations can be solved for (x_0, y_0, z_0, F) for each magnetopause crossing to an arbitrary degree of accuracy and so, effectively, represent an exact solution.
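A sketch of this closest-point calculation for the axisymmetric surface of equation (1) is given below (the full model also includes the polar-flattening factor ε, omitted here for brevity); conditions (B1) and (B2) are solved with a generic root finder.

```python
import numpy as np
from scipy.optimize import fsolve

def surface_residual(p, r0=25.0, K=0.8):
    """Implicit Shue surface G(p) = 0, equation (B1), with p = (x, y, z) in R_S."""
    r = np.linalg.norm(p)
    return r - r0 * (2.0 / (1.0 + p[0] / r)) ** K

def closest_point(crossing, r0=25.0, K=0.8):
    """Find the point p0 on the model surface closest to a crossing, together
    with the signed distance F, by requiring (B1) that p0 lies on the surface
    and (B2) that crossing - p0 = F * n(p0), with n the outward unit normal."""
    crossing = np.asarray(crossing, float)

    def normal(p, h=1e-6):
        # Numerical gradient of G, normalized to the outward unit normal
        g = np.array([(surface_residual(p + h * e, r0, K)
                       - surface_residual(p - h * e, r0, K)) / (2 * h)
                      for e in np.eye(3)])
        return g / np.linalg.norm(g)

    def equations(q):
        p0, F = q[:3], q[3]
        return np.append(crossing - p0 - F * normal(p0),
                         surface_residual(p0, r0, K))

    # Radial intersection as the first guess, as suggested in the text
    u = crossing / np.linalg.norm(crossing)
    r_surf = r0 * (2.0 / (1.0 + u[0])) ** K
    guess = np.append(u * r_surf, np.linalg.norm(crossing) - r_surf)
    sol = fsolve(equations, guess)
    return sol[:3], sol[3]   # closest point and signed distance F
```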
The distance between each magnetopause crossing and the model surface, constructed at the dynamic pressure estimated assuming pressure balance from equation (9), was calculated along with the RMS residual. The fitting routine is then iterated until the RMS residual converges to within a tolerance of 10−6 R_S. Fitting using the radial approximation method applied in previous studies and the exact method presented here yields the same results, within the estimated uncertainties, for all but one of the model coefficients.

Figure B1. This figure illustrates how the distance between a magnetopause crossing and the empirical surface described by a_i is calculated. The red point is the observed magnetopause crossing, the shaded region is the empirical surface constructed at D_P, and the planet is shown at the origin. The blue point is where the crossing radial vector (red line) intersects the surface, and the green point is the closest point on the surface to the crossing, which is found iteratively. A view of the plane containing all three points of interest is displayed. The arrow indicates the normal to the surface at the closest point to the crossing and shows that it passes directly through the crossing location, meaning that the green point is, indeed, the closest point.

The coefficient a_3, which chiefly controls how much the magnetopause "flares" in equation (3), was significantly different between the two methods. This is because the approximate method breaks down as an estimate of the shortest distance between the crossing and the model surface the further into the tail the model is projected. It is the crossings in this region of space that best define the degree of magnetopause flaring, so it stands to reason that a_3 would be the most affected by a change in the calculation of the crossing-surface distance. The results obtained using both methods were compared, and it was found that the new method reduced the RMS residual by ∼0.6 R_S, indicating a substantial increase in the model accuracy.

Appendix C: Coefficient Uncertainty Estimation The most efficient method of estimating the standard error for each of the model coefficients is to approximate the coefficient covariance matrix. This can be done using the variance in the distance between an observed magnetopause crossing and the location predicted by the model, known as the residual. This distance, F, can be calculated using the procedure outlined in Appendix B. The sample variance, σ², can then be calculated as

σ² = [1/(N − 1)] Σ_j (F_j − F̄)², (C1)

where N is the number of observations, F_j is the model-crossing distance for the jth crossing, and F̄ is the sample mean. The coefficient covariance matrix can then be calculated from the variance and the Jacobian matrix J, the matrix of first-order derivatives of F_j with respect to coefficient a_i; the coefficient covariance matrix, C, is then given by

C = σ² (JᵀJ)⁻¹, (C2)

where J and σ² are evaluated at the best fitting set of coefficients. A first-order approximation to the standard error of each coefficient can then be determined by taking the square root of the diagonal elements of C.
Note, though, that this approach neglects second-order covariances between the coefficients, which are important if the off-diagonal elements are comparable to the diagonal elements of C. Figure 5 shows that this is, indeed, the case for some of the coefficients used in this study. In particular, Figure 5 (top left) shows that the uncertainty ellipses are significantly inclined, which indicates a strong correlation between coefficients a_1 and a_2. The standard error calculated for a particular coefficient using the above procedure can be interpreted as the standard error of one coefficient assuming that the other coefficients remain fixed (the vertical or horizontal extent of the ellipse, depending on the coefficient being considered). However, if two coefficients are strongly correlated, changing one coefficient tends to cause a change in the other. Such a correlation indicates redundancy in the model and means that the model can be simplified by expressing one coefficient as a linear function of the other. In this case, it indicates that system size is strongly linked to the compressibility of the system. In cases in which a strong correlation is identified, more robust methods can be used to estimate the uncertainties instead. For the purposes of the current discussion, the Monte Carlo bootstrap method is used. This is a powerful technique that can be used to calculate the distribution of the coefficients with few underlying assumptions: only that the observations are a good representation of the underlying population and that the data are independent. This method involves running the fitting routine many times (400 samplings were used here), fitting the model to a different set of crossings each time. As such, it is computationally expensive. In this case, N magnetopause crossings are selected at random from the full set of N crossings but, crucially, these crossings are selected with replacement. This means that, for a given random sample, some crossings are selected multiple times, whereas others are discarded. As a result, the best fitting coefficients are different for each random sample drawn. These coefficients are recorded, and confidence intervals for each coefficient can be evaluated. These confidence intervals can then be corrected for bias and skewness [Efron, 1987]. The uncertainties estimated using both methods are comparable, and the maximum of these has been reported for each coefficient in Table 1.
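A sketch of the bootstrap loop just described is given below; fit_model is a hypothetical stand-in for the full fitting routine of Appendix A.

```python
import numpy as np

def bootstrap_uncertainties(crossings, fit_model, n_boot=400, seed=0):
    """Monte Carlo bootstrap of the fitted coefficients: refit the model to
    N crossings drawn with replacement, n_boot times (400 in the text), and
    return the per-coefficient mean and spread. `fit_model` takes a list of
    crossings and returns the best-fitting coefficient array."""
    rng = np.random.default_rng(seed)
    n = len(crossings)
    samples = [fit_model([crossings[i] for i in rng.integers(0, n, n)])
               for _ in range(n_boot)]
    samples = np.asarray(samples)
    return samples.mean(axis=0), samples.std(axis=0, ddof=1)
```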
Period-luminosity diagram of long period variables in the Magellanic Clouds. New aspects revealed from Gaia Data Release 2 Context: The period-luminosity diagram (PLD) has proven to be a powerful tool for studying populations of pulsating red giants. Gaia Data Release 2 (DR2) provides a large data set including many long-period variables (LPVs) on which this tool can be applied. Aims: We investigate the location of LPVs from the Large and Small Magellanic Clouds in the PLD using various optical and infrared luminosity indicators from Gaia and 2MASS, respectively. We thereby distinguish between stars of different masses and surface chemistry. Methods: The data set taken from the Gaia DR2 catalogue of LPVs allows for a homogeneous study from low- to high-mass LPVs. These sources are divided into sub-populations of asymptotic giant branch (AGB) stars according to their mass and their O- or C-rich nature using the Gaia-2MASS diagram developed by our group. This diagram uses a Wesenheit index Wrp based on Wesenheit functions in the Gaia and 2MASS photometric bands. Four different luminosity indicators are used to study the period-luminosity (P-L) relations. Results: We provide the first observational evidence of a P-L relation offset for both fundamental and 1O pulsators between low- and intermediate-mass O-rich stars, in agreement with published pulsation predictions. Among the luminosity indicators explored, sequence C' is the narrowest in the P-Wrp diagram, and is thus to be preferred over the other PLDs for the determination of distances using LPVs. The majority of massive asymptotic giant branch (AGB) stars and red supergiants form a smooth extension of sequence C of low- and intermediate-mass AGB stars in the P-Wrp diagram, suggesting that they pulsate in the fundamental mode. All results are similar in the two Magellanic Clouds.

Introduction Long period variables (LPVs) are easily detectable representatives of the asymptotic giant branch (AGB) phase of the evolution of low- and intermediate-mass stars. The AGB phase is critical for stellar and galactic evolution (e.g. Pastorelli et al. 2019; Marigo et al. 2017; Marigo 2015; Bruzual 2007; Maraston et al. 2006). In terms of stellar evolution, the high mass-loss rates on the AGB are responsible for ending the life of a star. At the same time, a large fraction of the intermediate to heavy elements are produced and returned to the interstellar medium during the AGB phase (e.g. Ventura et al. 2018; Slemer et al. 2017; Cristallo et al. 2015; Karakas & Lattanzio 2014). Therefore, this phase is of high importance for the chemical evolution of galaxies.

Long period variables are characterised by large-amplitude variations in the visual and by pulsation periods of about 10 to 1000 days. The period-luminosity diagram (PLD) has become one of the main tools for studying LPVs and their evolution along the AGB. Wood & Sebo (1996) were the first to detect that semi-regular variables (SRVs) follow an additional P-L relation distinct from that of the Miras. This finding was interpreted, and later confirmed, as due to pulsations in overtone modes, while Miras pulsate in the fundamental mode (e.g. Wood et al. 1999; Wood 2000). As a consequence of the lack of reliable distances to LPVs in the solar neighbourhood, studies of the PLD have relied on variables in stellar systems, most of all the Magellanic Clouds (Feast et al. 1989; Wood 2000; Ita et al. 2004a; Fraser et al. 2005; Soszynski et al. 2007), but also in globular clusters (Feast et al.
2002; Lebzelter & Wood 2005), the Galactic Bulge (Glass et al. 1995) and some local group galaxies (Whitelock 2012; Whitelock et al. 2013; Menzies et al. 2015). Studies of the Magellanic Clouds finally revealed the presence of at least five parallel P-L relations, or sequences (Wood et al. 1999; Ita et al. 2004b). Distinct sequences are associated with distinct pulsation modes, with two exceptions: sequences B and C' are both due to the same pulsation mode (the first overtone (1O) mode; Trabucchi et al. 2017), while sequence D hosts the so-called long secondary periods (LSPs), which are likely the result of a different kind of variability, but whose origin is still unknown (Hinkle et al. 2002; Wood & Nicholls 2009; Saio et al. 2015). Trabucchi et al. (2017) argued that when stars pulsating predominantly in the 1O mode are crossing the region between sequences B and C', the stars tend to develop a LSP with a larger amplitude than the 1O mode itself. The LSP is thus more easily detected, resulting in the apparent gap between sequences B and C'. Yet, despite pulsating in the same mode, stars associated with sequence C' (generally classified as SRVs) have observed properties markedly distinct from those on sequence B (the so-called OGLE (Optical Gravitational Lensing Experiment) Small Amplitude Red Giants, OSARGs; Wray et al. 2004). The SRVs not only show larger visual amplitudes, but they also have higher infrared excess and estimated mass-loss rates (McDonald & Trabucchi 2019). In this scenario, Mira variables are found to lie on sequence C, associated with pulsation in the fundamental mode, and are experiencing the latest stages of the AGB.

The past 20 years of research in this area revealed the dominant role of large surveys for the detection and study of P-L relations. Among the ground-based studies, the monitoring of the Magellanic Clouds and the galactic bulge within the OGLE project (Soszyński et al. 2009, 2011) stands out because of the long time coverage of light changes, the high time resolution and the sky area it comprises. With the advent of Gaia Data Release 2 (DR2), a database became available which provides a deep all-sky monitoring of LPVs in one wide (G) and two semiwide (G_BP and G_RP) optical bands covering bright and faint stars from a few magnitudes down to G = 20.7 mag, and with distances and proper motions as additional information. The 22 months of data published in DR2 already allow for a study of periods and pulsational behaviour of LPVs. Recently, our group showed (Lebzelter et al. 2018) that a combination of Wesenheit functions using Gaia and near-infrared wavelength bands allows us to efficiently separate groups of stars on the AGB according to their mass and surface chemistry (C/O-rich). To simplify the description, in the following, a diagram plotting 2MASS K_s against the above-mentioned difference of Wesenheit indices (see Sect. 2.1 for the definitions of W_BP,RP and W_J,Ks) is called a Gaia-2MASS diagram. This diagram adds a mass and dredge-up indicator to the PLD. A further exploration of the applicability of this diagram to various stellar systems has been presented in Mowlavi et al. (2019).

Pulsation models of LPVs predict a dependency of the P-L relations on mass (e.g. Wood 2015) and surface chemistry (e.g. Lebzelter & Wood 2007). The goal of this paper is to extend our understanding of variability on the AGB, particularly for the more massive and thus more luminous part of this evolutionary phase, and in relation to the distinction between C- and O-rich atmospheric chemistry.
We investigate PLDs using Gaia photometry as brightness indicators to add these relations to the suite of tools for the exploration of LPVs. At the same time, we aim to explore further the strengths of the Gaia-2MASS diagram for the study of evolved stellar populations.

Selection and characterization of sample stars The starting point for our sample selection was the Gaia DR2 database of LPVs, because for these objects the availability of G_BP and G_RP photometry allows for a distinction according to mass using the Gaia-2MASS diagram, and because Gaia also covers the brightest objects in the Magellanic Clouds, which OGLE does not. We selected stars in the LMC and SMC similar to the selection process in Lebzelter et al. (2018) and Mowlavi et al. (2019) according to their location in the sky, their proper motion, and their parallax. While we present results for both clouds in this paper, the description and interpretation focusses on the LMC owing to the much lower number of sample stars in the SMC.

The sample inherits the limitations of the first Gaia catalogue of LPVs, which focussed on candidates with variability amplitudes larger than 0.2 mag in G, and prioritised low contamination over completeness, resulting in a catalogue of LPV candidates with a completeness of about 45% in the Magellanic Clouds. In addition, the study of the periods in this paper is restricted to the range between 70 and 1000 days, the lower limit resulting from restrictions due to the Gaia scanning law introducing spurious frequencies at the low-period end, and the upper limit resulting from the 22-month time span of Gaia DR2 (we refer to Mowlavi et al. 2018, for a description of the properties of the catalogue). While the latter condition would in principle limit the period range to a maximum of ∼500 d, we still extend the study up to 1000 d to include LPVs that show variations on that timescale, particularly LSPs forming sequence D in the PLD, keeping in mind that the uncertainties on periods longer than 500 d can be large. Finally, we note that the DR2 catalogue of LPVs includes only the variability period of the largest amplitude. Multi-periodicity, while being very common among LPVs, is awaiting additional data in future data releases to be properly characterised.

These limitations, in combination with a higher limiting amplitude, substantially affect the appearance of the PLD, especially when compared with OGLE observations. The lower period limit, in particular, effectively leads to the loss of the low-brightness parts of the P-L sequences C' and C (cf. Sect. 3). In addition, stars with a 1O mode period transitioning between sequences B and C' are likely to develop a more easily detectable LSP on sequence D. When, as in the present case, a single observed period per star is available, this contributes to the apparent depletion of sequence C'.

We then assigned classifiers for mass (low, intermediate, and high) and chemical type (O-rich or C-rich) to each of our LPVs using the Gaia-2MASS diagram and the definitions for the LMC set up in Lebzelter et al. (2018). This is illustrated in the left panel of Fig. 1, in which the magnitude in the 2MASS K_s band is shown against the difference between the Wesenheit indices constructed with Gaia and 2MASS photometry, respectively, i.e.

$W_{BP,RP} = G_{RP} - 1.3\,(G_{BP} - G_{RP})$, (1)
$W_{J,K_s} = K_s - 0.686\,(J - K_s)$. (2)

Lebzelter et al. (2018) identified four branches in the Gaia-2MASS diagram, labelled with letters from (a) to (d). In this work, we use the same labels as in Lebzelter et al.
(2018) to identify the four branches in the Gaia-2MASS diagram. In addition, we use the label (a-f) to identify the portion of branch (a) populated by RGBs and faint AGBs (i.e. stars below the tip of the RGB), and the label (b-x) for the right-most side of branch (b), containing stars identified as extremely dusty. That group predominantly contains C-stars, which efficiently produce high-opacity dust grains (Höfner & Olofsson 2018), but may also include highly reddened OH-IR stars. Using the list of OH-IR stars of the LMC recently presented by Goldman et al. (2017) we find nine OH-IR stars among the oxygen-rich stars in our diagram, and six objects in the part denoted as (b-x). All but one of the latter OH-IR stars are among the reddest objects in (b-x).

We applied one modification to the definitions of the various groups in this diagram compared to Lebzelter et al. (2018), namely for the boundaries of groups (c) (intermediate-mass) and (d) (high-mass), in which case we replaced the original definitions with, respectively,

$K_s > 8.74 + (W_{BP,RP} - W_{J,K_s})$ and $K_s \leq 10.5 + (W_{BP,RP} - W_{J,K_s})$. (3)

This new boundary between groups (c) and (d) accounts better for the distribution of LPVs in the SMC (right panel of Fig. 1) while keeping a good solution for the LMC. A possible dependency of the boundaries on metallicity still needs to be explored.

Fig. 2. Left panel: Ratio of the Gaia DR2 period to the primary period from the OGLE-III database for the LMC (blue) and SMC (red), respectively, as well as the corresponding histogram. Right panel: Same, but using the OGLE-III period whose value is closest to the Gaia DR2 period. Above P_Gaia = 500 days, Gaia periods become uncertain owing to the limited time span of the Gaia DR2 data.

The Gaia-2MASS diagram of the SMC shown in Fig. 1 accounts for the difference in distance with the LMC assuming a distance modulus of 18.49 ± 0.09 mag for the LMC and 18.96 ± 0.02 mag for the SMC (de Grijs et al. 2017). It is clear that the chosen values of the distance moduli affect the classification of stars that are located near the boundaries between groups, thereby introducing a source of error. This uncertainty overlaps with instrumental uncertainties, which are in the millimagnitude range for the Gaia photometry (Evans et al. 2018) and about 0.03 mag for the 2MASS data. However, because of the limited sampling of the light curves, the deviation from the mean brightness of a star, owing to its variability and cycle-to-cycle fluctuations, certainly dominates the uncertainty in the two quantities plotted in Fig. 1. Naturally, this leads to misclassified objects. Additional sources of error stem from the different extinction between the two systems (here neglected) as well as the different magnitude of the RGB tip, whose value was employed by Lebzelter et al. (2018) to discriminate between low-mass O-rich AGBs and the group consisting of RGBs and faint AGBs. We do not consider this a problem in the present paper since we do not aim to derive statistical quantities from our analysis here.

The bolometric corrections (BCs) for each object in our sample are solely based on Gaia data (G, G_BP, G_RP). These corrections differ from those used in DR2 and are described in detail in Appendix A. These BCs are used to compute the bolometric magnitudes m_bol from the G-band photometry.
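To make this classification step concrete, the sketch below evaluates the two boundary lines of equation (3). The 1.3 factor in W_BP,RP is quoted later in the paper (Sect. 4.1), while the 0.686 factor in W_J,Ks is the commonly used value and an assumption here; the remaining branch boundaries of Lebzelter et al. (2018) are not reproduced, so this is only a partial sketch, not the full classifier.

```python
import numpy as np

def gaia_2mass_cd_boundaries(G_BP, G_RP, J, Ks):
    """Evaluate the modified group (c)/(d) boundary lines of equation (3).

    W_BP,RP = G_RP - 1.3 (G_BP - G_RP)   (1.3 quoted in Sect. 4.1)
    W_J,Ks  = Ks - 0.686 (J - Ks)        (0.686 assumed, standard value)
    """
    G_BP, G_RP = np.asarray(G_BP), np.asarray(G_RP)
    J, Ks = np.asarray(J), np.asarray(Ks)

    W_bprp = G_RP - 1.3 * (G_BP - G_RP)
    W_jks = Ks - 0.686 * (J - Ks)
    x = W_bprp - W_jks                 # abscissa of the Gaia-2MASS diagram

    cond_c = Ks > 8.74 + x             # condition quoted for group (c)
    cond_d = Ks <= 10.5 + x            # condition quoted for group (d)
    return x, cond_c, cond_d
```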
Comparing OGLE and Gaia periods As mentioned above, the OGLE survey provides one of the best catalogues of LPVs in the Magellanic Clouds in terms of completeness and accuracy, listing multiple periods and amplitudes. Accordingly, OGLE data have been widely used to study PLDs of LPVs, in particular in the study of Trabucchi et al. (2017). For our study of the pulsational properties of LPVs with Gaia data, it was a natural choice to use OGLE data as a reference to judge the accuracy, in particular, of the Gaia periods and to compare any conclusions deduced from our study with previous works on the PLD. A first test of that kind was part of the paper by Mowlavi et al. (2018), indicating a reasonable reproduction of OGLE periods and amplitudes with the help of Gaia data. We note that there are two advantages of using Gaia data for our study. First, it is possible to access the stellar parameters mass and chemistry (using the Gaia-2MASS diagram) and, second, the data include bright AGB stars in the Magellanic Clouds, which are cut off by the upper brightness limit of the OGLE survey. The disadvantages of Gaia data are the shorter length of the time series, the lower sampling density of the light curves, and, for DR2 data, the availability of only a single period.

We compare periods between the two surveys in Fig. 2. First, we compare Gaia periods with the primary periods (i.e. those with the largest amplitude in a given star) detected by OGLE. For this purpose we cross-matched the OGLE and Gaia DR2 catalogues with a radius of 1 arcsec and found a match for 5279 of the total 12811 Magellanic Cloud LPVs from the Gaia DR2 data set (the majority of the remaining LPVs are out of the OGLE field of view). The ratio between the two periods is plotted against the Gaia period in the left panel of Fig. 2, which also shows the histogram of these ratios. We find a high rate of agreement; about 70% of the Gaia DR2 periods in the range 70-500 days differ by less than 25% from the OGLE primary periods. At longer periods, the limited length of the Gaia time series introduces a larger period uncertainty.

For a significant fraction of stars, there are evident discrepancies between the two surveys, a direct consequence of multi-periodicity and of the lower period limit of the Gaia catalogue. Indeed, a multi-periodic star with a primary period shorter than 60 days is present in the OGLE catalogue, but not in Gaia, which lists one of the longer periods of the star instead. This is the case for objects in the upper right corner of the left panel of Fig. 2, for which the Gaia pipeline detected a LSP in place of, for example, the 1O mode period found by OGLE. Similarly, the 1O mode period may be listed in the present catalogue in place of the fundamental mode period or vice versa. Figure 2 shows candidates with this difference to the OGLE results in the two groups located directly above and below P_Gaia/P_OGLE = 1. A large fraction of objects disappear from these two groups when switching to P_Gaia/P_closest, i.e. when using the OGLE period nearest to the Gaia period (right panel of Fig. 2). In the remaining cases the Gaia period search algorithm seems to have picked the first harmonic of the OGLE period, likely because the sampling of the light curve by Gaia is very different from that of OGLE. The bias introduced by multi-periodicity can thus be partially accounted for by comparing Gaia periods with the closest period listed for the same star in the OGLE catalogue, rather than the primary OGLE period.
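The comparison in the right panel of Fig. 2 amounts to, for each cross-matched star, picking the OGLE period closest to the Gaia one. A minimal sketch, with illustrative array names:

```python
import numpy as np

def closest_period_ratio(p_gaia, ogle_periods):
    """For each star, the ratio of the Gaia DR2 period to the closest
    of that star's OGLE periods (right panel of Fig. 2).

    p_gaia       : length-N array of Gaia DR2 periods (days)
    ogle_periods : list of N arrays, the multiple OGLE periods per star
    """
    ratios = np.empty(len(p_gaia))
    for k, (pg, po) in enumerate(zip(p_gaia, ogle_periods)):
        po = np.asarray(po)
        p_closest = po[np.argmin(np.abs(po - pg))]
        ratios[k] = pg / p_closest
    return ratios
```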
Our comparison shows that, within the investigated period range, Gaia periods are in very good agreement with the OGLE periods. Therefore, the use of Gaia light curve data in this study is a valid approach to investigate the PLDs of LPVs. Since we restrict our conclusions on the variability found on sequence D to qualitative statements, accuracy limitations of Gaia data near the long-period end play less of a role. The same is true for the bias associated with the lack of multi-periodicity information in the Gaia catalogue.

The high-mass stars and supergiants [group (d)] require further discussion to verify the quality of the periods for this group. The red supergiants, as the most luminous objects in the PLD, naturally show, on average, the longest periods and populate a period range where the length of the Gaia DR2 data set is of the same order as the period. We show in Fig. 2 that periods above 500 days derived from the Gaia DR2 data exhibit an increasing uncertainty. However, this finding is based on a general comparison of Gaia and OGLE data. The OGLE observations are saturated for the supergiants in the Magellanic Clouds, and therefore a direct comparison with our group (d) stars is not possible except for those with the lowest brightnesses. Hoping to find an alternative sample for comparison, we extracted light curve data for our supergiants from the Massive Compact Halo Objects Project (MACHO) database. However, it turned out that those measurements seem to be affected by saturation effects as well and do not provide conclusive results.

Owing to the lack of usable comparison data we decided to look at the individual light curves to evaluate the quality of the derived periods. This exercise confirmed the variable nature of our sample supergiants. Figure 3 shows four example Gaia light curves from our group (d) stars. The top row shows two typical supergiant light curves that are well sampled by the Gaia time series. In the bottom row, the left example shows a case with a very large variability amplitude and a light curve reminiscent of a Mira. We therefore think that this star is a massive AGB star close to the end of its evolution, which is also supported by the presence of a bump in the light curve before maximum light (McSaveney et al. 2007). The very red colour of this object is remarkable considering that in K_s the star belongs to the most luminous group, while in G it is of average brightness. The fourth example is a supergiant with a period that is too long to be fully covered by the time span included in DR2. Such objects, although they come with a period in the DR2 catalogue, were excluded from our analysis. We also rejected stars that had badly sampled or irregular light curves. Excluding those objects we ended up with a total sample of 78 group (d) stars in the LMC and 17 in the SMC, which is in both cases about 45% of all candidates in this group.

Results Relations between period and luminosity are a common tool to study pulsational properties, such as the dominant pulsation mode, of a distinct group of variable stars. In observational studies of LPVs, the luminosity, a quantity rarely available for large samples of stars, is typically replaced by the absolute K-band magnitude or the Wesenheit function W_J,Ks. In our study, we follow this approach using the P-K_s and P-W_J,Ks diagrams, and further investigate the PLDs using two luminosity indicators derived from Gaia, namely W_BP,RP and m_bol. The study was performed for both the LMC and SMC.
Assuming that all stars within one of the Clouds have the same distance, we did not apply distance corrections to the magnitudes used. The average line-of-sight depths of the LMC and SMC were estimated by Subramanian & Subramaniam (2009) to be about 4 kpc and 4.5 kpc, respectively, i.e. less than 10% of their distance to the Sun. These depths would translate into magnitude depths of about 0.2 mag (for the LMC at ∼50 kpc, $5\log_{10}(52/48) \approx 0.17$ mag). However, for the SMC significantly larger depths occur along some directions (Ripepi et al. 2017; Sun et al. 2018). The line-of-sight depths are expected to add some scatter around the P-L relations, in particular for the SMC. No reddening corrections were applied either, in accordance with the very low impact of interstellar extinction in the K_s band for objects outside the disc. Besides, the small differential interstellar extinction across each Cloud (e.g. Milone et al. 2009) leads to only a small dispersion of the m_bol values.

We start this section with a qualitative description in Sect. 3.1 of the PLDs using the four luminosity indicators K_s, W_J,Ks, W_BP,RP, and m_bol. The description is then further detailed for intermediate- and high-mass O-rich stars in Sects. 3.2 and 3.3, respectively. The case of C-stars is tackled in Sect. 3.4.

Period-luminosity diagrams for various luminosity indicators The P-K_s diagrams of our complete samples for the two Clouds are shown in Fig. 4. The LPVs for which a period has been measured from Gaia DR2 data populate the bright ends of sequences C', C, and D, as visible in the figure. We point out that the Gaia data set extends the OGLE-III observations (Soszyński et al. 2009, 2011) towards higher brightness by almost 2 mag in K_s. Our study thus fills the gap between low-mass giants and supergiants left in previous studies (see figure 7 of Kiss et al. 2006). The consistency with the best-fit relations derived by Soszynski et al. (2007), also shown in Fig. 4, further testifies to the good agreement with OGLE data discussed above. Sequence D as observed by Gaia is more heavily scattered (cf. Soszyński et al. 2009, their figure 1) and systematically shifted towards shorter periods, a consequence of the degraded precision in the determination of periods longer than ∼500 days from DR2 data.

O-rich stars with masses ≲1.5 M_⊙ [branch (a) in Fig. 1] are observed predominantly on sequence D, meaning that their primary variability is associated with a LSP. This is especially true for stars in the (a-f) group in Fig. 1, classified as RGBs or faint AGBs. These stars usually also pulsate in the 1O mode, but with a lower amplitude than the LSP and a relatively short period (on the left side of sequence C', e.g. Trabucchi et al. 2017; Wood 2015), so that it is less likely characterised in DR2.

The PLDs defined by the other luminosity indicators, namely W_J,Ks, W_BP,RP, and m_bol, are shown in Fig. 5 for O-rich stars, and in Fig. 6 for C-rich stars. In light of the all-sky coverage of the Gaia survey, the use of W_BP,RP and m_bol, which enables a study of PLDs solely based on data from this spacecraft, provides an important step in the full exploration of the Gaia data set on LPVs. Figures 5 and 6 are organised in the same way, giving P-K_s, P-W_J,Ks, P-W_BP,RP, and P-m_bol from top to bottom, respectively. The left column in each figure is used for LMC data, the right for SMC data. The colour-coding of the various groups is explained in the top left corner of each panel. All four luminosity indicators in Figs.
5 and 6 lead to a set of clear relations between the indicator and the period. At the same time we immediately note significant differences between the four kinds of diagrams and between the relations for the various groups of LPVs identified in Fig. 1. For the O-rich stars (C-rich stars are discussed in Sect. 3.4), we note the following points when comparing the diagrams using the different luminosity indicators (see Fig. 5):

(1) The P-K_s and the P-W_J,Ks diagrams show a high similarity, as has been outlined in the literature before (e.g. Soszyński & Wood 2013).

(2) In the P-W_BP,RP diagram (third row in Fig. 5), sequence C' is narrower and thus seemingly better defined compared to the infrared indicators. We note that, in contrast to W_J,Ks, the quantity W_BP,RP consists of photometry averaged over a light cycle. This may in part be relevant for the smaller scatter. However, only 5% of the stars on C' have large amplitudes in G (>1 mag), and they are expected to have even smaller amplitudes in K_s. Therefore, the effect of averaging is expected to be comparably small. This also suggests that differences between the widths of the sequences in various luminosity indicators are not primarily caused by using single or averaged multi-epoch photometry.

Fig. 6. Period-luminosity diagrams of Gaia LPV candidates in the LMC (left) and SMC (right), including only stars classified as C-rich (with extreme C-rich stars noted x-C-AGB in the panels), and using different luminosity indicators. From top to bottom, K_s band and W_J,Ks index from 2MASS, W_BP,RP index from Gaia, and the bolometric magnitude computed as described in Appendix A. Lines in the top four panels are best fits to the P-L relations of C-rich LPVs from Soszynski et al. (2007).

(3) The structures seen in the P-m_bol diagram (bottom row of Fig. 5) coincide more with the infrared luminosity indicators than with W_BP,RP. While the diagram still facilitates a distinction between the various groups, the relations, in particular for the low-mass stars, are flatter. A difference in inclination between the relations for low- and intermediate-mass stars is likely present. The supergiants and the high-mass AGB stars do not show any distinct sequences in this diagram.

The findings described in this section are applicable to both the LMC and SMC samples, although the SMC sample is less clear because of the lower number of stars.

Intermediate-mass AGB stars. A large fraction of the bright AGB stars in the Gaia sample, which Lebzelter et al. (2018) classified as intermediate-mass objects, are not present in the OGLE-III database, and are thus of particular interest. Stars on branch (c) clearly follow three P-L relations, consistent with sequences C', C, and D. But these are systematically shifted towards shorter periods compared to low-mass O-rich stars in the infrared PLDs (first and second rows in Fig. 5). This is a consequence of their higher masses and agrees with the theoretical prediction of Wood (2015). This offset is not visible when the W_BP,RP index is employed (Fig. 5, third row), and its absence accounts for a much narrower sequence C' compared to K_s and W_J,Ks. In the P-m_bol diagram, the offset between the relations for low- and intermediate-mass LPVs is clearly detectable only for sequence C'. Sequence C is very wide for the intermediate-mass stars and the location of the relation is much less defined than for W_J,Ks, for example. However, a smooth transition between low- and intermediate-mass objects is suggested.
We explore the cause of this difference between the various luminosity indicators in Sect. 4.1.

High-mass AGB and RSG stars. The LPVs that belong to branch (d) have even higher masses than those on branch (c). Therefore, we would expect to find an offset for these objects as well. However, their distribution in the PLD suffers from a larger scatter, and it is less straightforward to identify the P-L relation(s) they belong to. This is further complicated by the fundamentally different behaviour they exhibit depending on whether visual or near-infrared bands are used to track their luminosity. Judging by the PLDs in the upper panels of Fig. 5, where K_s and W_J,Ks are used, the bulk of stars from branch (d) appear to lie on a prolongation of sequence C'. In contrast, when Gaia photometry is used in the PLD (i.e. the W_BP,RP index, Fig. 5, third row), most of these bright stars seem to follow the P-L sequence C. The P-m_bol diagram (fourth row in Fig. 5) does not show pronounced indications for a connection of the high-mass LPVs and red supergiants to any of the relations found from low- and intermediate-mass stars. The location of these stars in this diagram, however, is in excellent agreement with the same kind of diagram presented in the literature, in a study that included supergiants not only from the Magellanic Clouds but also from the Galactic field (see their figures 6 and 7).

There are no high-mass O-rich LPVs that show a variation connected to sequence D. The existence of such LSPs had been reported by several authors (e.g. Kiss et al. 2006). However, we determined only a single period per star, and the corresponding sequence D period would likely exceed our upper period cut-off.

C-stars The P-L relations of C-rich stars [groups (b) and (b-x)] are shown in Fig. 6. For these objects, the differences in the PLD arising from using distinct luminosity indicators become even more striking. When using W_BP,RP (third row of Fig. 6), the highly reddened C-stars are shifted away from the bluer C-stars and the M-stars (Fig. 4) towards lower brightness. This indicates that the Gaia Wesenheit function W_BP,RP does not sufficiently compensate for the reddening by large amounts of circumstellar dust affecting G_RP. We do not observe such a difference in W_J,Ks (upper panels of Fig. 6), indicating that G_BP − G_RP gets saturated for large extinction values, in contrast to J − K_s, for which the thick circumstellar shell starts to emit in the infrared, adding to the brightness in K_s and W_J,Ks. This agrees with results from the preliminary analysis of the Gaia Wesenheit function presented in Lebzelter et al. (2018). Since the BC used to compute m_bol also depends on G_BP − G_RP, we observe the same effect as for W_BP,RP in the P-m_bol diagram (bottom panels in Fig. 6).

Aside from the offset of the highly reddened C-stars, the P-W_BP,RP and P-m_bol diagrams show structures very similar to the P-K_s and P-W_J,Ks diagrams. Like the O-rich LPVs, the C-rich LPVs [branch (b)] follow all three P-L relations, while the extreme C-rich AGBs [group (b-x)] are likely to follow only sequence C, as is most evident when using W_J,Ks in the PLD (Fig. 6, second row). The few stars of this group that are found on sequences C' or D are very close to the separation line between C-rich and extreme C-rich stars in Fig. 1. Therefore, a misclassification cannot be excluded. Those extreme C-stars below sequence C probably have very thick shells (i.e. high mass-loss rates) and emit mostly in the far-infrared, so that K_s is indeed faint.
Again, a qualitatively similar behaviour is seen in both the LMC and SMC. It is noteworthy that the P-L sequence with the longest periods formed by C-stars, usually dubbed sequence D, is clearly offset from the location of sequence D defined by carbon stars in Soszynski et al. (2007), shown by the dotted lines in Fig. 6. We found a smaller shift in the same direction when studying the O-rich stars (Fig. 5), which is likely caused by the limited time span of the Gaia light curves and the upper period limit resulting from it. This effect is more pronounced for C-stars than for O-rich objects, likely because of their on average longer periods compared to M-type stars. With a period closer to our period cut-off, there is a higher risk that our period search fails to derive the correct value.

Period-luminosity relations at different masses As illustrated in the top two rows of Fig. 5 and as pointed out in Sect. 3.2, there is a very obvious shift between the P-L relations of low- and intermediate-mass O-rich stars; the latter show a shorter period at a given luminosity. This shift has previously been noted for sequence C (the Miras) by Feast et al. (1989) and Hughes & Wood (1990). As briefly mentioned above, such a shift is expected to occur because a higher mass of the pulsating red giant leads to a shorter period at a given luminosity. Whitelock et al. (2003) noted that at least some of the stars shifted to a shorter period from sequence C had spectral signatures of hot-bottom burning, consistent with their intermediate-mass assignment. This is the first time the shift in period with mass has been observed for LPVs pulsating in the 1O mode. Because of the conversion of stars in the mass range between groups (a) and (c) into carbon stars, we do not see a smooth transition but rather a break-up into two sequences.

Interestingly, the shift disappears in the P-W_BP,RP diagram. To understand this, we have to keep in mind that a star of higher mass also shows a higher effective temperature, which again may affect fluxes in the various photometric bands and accordingly also colours. We explored the effect of temperature on the various filters with the help of hydrostatic models from Aringer et al. (2016). Fig. 7 presents model spectra of O-rich AGB stars of various temperatures, and compares them with the responses of the two Gaia and the two near-infrared passbands used in our analysis. Figure 8 shows the runs of K_s, J − K_s, W_J,Ks, G_BP − G_RP, and W_BP,RP with temperature; K_s and J − K_s are only weakly dependent on temperature. As a consequence, W_J,Ks behaves very similarly to K_s, and any systematic temperature difference between low- and intermediate-mass stars does not become visible in the corresponding PLDs. In the G_BP filter, the absorption bands of TiO make a huge impact on the flux, which therefore is highly dependent on the temperature of the star. Since the G_RP flux is less affected by TiO and VO absorption than the G_BP flux, the colour constructed from these two filters shows a high sensitivity to temperature, in particular down to about 3000 K. To compute W_BP,RP, the temperature-sensitive colour G_BP − G_RP, scaled by the factor 1.3, is subtracted from G_RP, thereby compensating for the flux depression in G_RP. However, we see in Fig. 8 that while G_RP and G_BP − G_RP are decreasing in value with temperature, the combined quantity W_BP,RP is increasing in value towards higher temperatures because the slope of the temperature dependency of 1.3(G_BP − G_RP) is steeper than that of G_RP.
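The sign argument can be made explicit with made-up slope values; the numbers below are purely illustrative and are not taken from Fig. 8.

```python
# Both G_RP and the colour G_BP - G_RP brighten (decrease) towards
# higher temperature, but the 1.3-scaled colour term changes faster,
# so the Wesenheit index W = G_RP - 1.3 (G_BP - G_RP) *increases*
# with temperature. Slopes are hypothetical, in mag per 100 K.
dGRP_dT = -0.04          # d(G_RP)/dT, illustrative value
dColour_dT = -0.05       # d(G_BP - G_RP)/dT, illustrative value

dW_dT = dGRP_dT - 1.3 * dColour_dT
print(dW_dT)             # +0.025 > 0: W_BP,RP rises with temperature
```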
The factor 1.3 used in the construction of the Wesenheit function obviously leads to an overcompensation of the temperature dependencies of the individual Gaia filters. Both intermediate-mass and low-mass LPVs are made brighter in W_BP,RP, but this occurs to a smaller extent for the former owing to their higher temperature. The two effects, the offset due to mass (i.e. a shorter period due to a higher mass at a given luminosity) and the effect of temperature on W_BP,RP, cancel out, and thus sequences C' and C become single sequences narrower than their near-infrared counterparts. When interpreting the P-m_bol diagram in this context, we have to be careful since the BC is a function of G_BP − G_RP. Therefore, our m_bol values are expected to exhibit some similarity with the W_BP,RP values, although the factors in the colour terms involved are very different. As a consequence, the P-m_bol sequences of low- and intermediate-mass stars do not align as they do for P-W_BP,RP.

After detecting the mass dependency of the P-L relation for O-rich stars in our observations, we decided to investigate the C-rich stars for indications of a mass dependency as well. With the onset of efficient dust production, circumstellar absorption systematically depresses the brightness in all optical and near-infrared filter bands. Therefore, such an investigation has to be limited to C-stars with low amounts of dust around them, i.e. group (b) stars in the Gaia-2MASS diagram. There is no doubt that with this approach some of the most massive C-stars are excluded. Within group (b) in our diagram, a separation of the stars according to mass is not obvious. Using the brightest C-stars in K_s gives a sample that is scattered over the bright part of the PLDs in Fig. 6. However, selecting the brightest C-stars in the G band (bottom panel of Fig. 9) gives a sample of stars that form a sequence in the P-K_s diagram slightly offset from the bulk of carbon stars on sequence C' (Fig. 10). This group strongly recalls the behaviour of the intermediate-mass stars in the O-rich case. For C-stars, the luminosity indicator W_J,Ks gives the narrowest sequences. We suspect, in analogy to the O-rich stars, that the C-stars delineated in Fig. 10 are more massive than the majority of the group (b) C-stars in our sample, and that W_J,Ks, like W_BP,RP in the O-rich case, compensates for the temperature and mass dispersion, leading to single, narrow sequences C' and C.

Comparison with evolutionary models for low- and intermediate-mass stars We compare the observed data of LPVs with the AGB evolutionary tracks computed with the COLIBRI code (Marigo et al. 2013) and presented in Pastorelli et al. (2019). The models used for the present work cover the mass range between 1 and 5 M_⊙ for two choices of the initial metallicity, Z = 0.006 and Z = 0.002. Along each AGB track, periods and amplitude growth rates are calculated using the pulsation models of Trabucchi et al. (2019). The results for 1O mode models are shown in Figs. 11 and 12. For the SMC, the selected tracks refer to Z = 0.002, thus slightly underestimating the average metallicity of this galaxy. This has to be kept in mind in the comparison of models and observations. The colour coding represents the current mass of the star, hence there are colour changes along the tracks. Current pulsation models of LPVs suffer from a known tendency to overestimate the period of the fundamental mode, especially at large luminosities (e.g. Trabucchi et al.
2017), hence we preferred to limit the comparison to 1O mode periods, for which we find a good agreement with observations. Features identified in the four kinds of PLDs are well reproduced by our set of models. The 'mass spread' across individual P-L sequences is clearly visible in the plots of period versus K_s and W_J,Ks. When periods are shown against W_BP,RP, this effect is suppressed, which is clearly visible for LMC metallicities (Z = 0.006) and is less obvious in the case of the SMC (Z = 0.002). To understand the difference between the two Magellanic Clouds we need to consider that the lower metallicity leads to weaker molecular bands at a given temperature, which again reduces the sensitivity of G_BP − G_RP to temperature. In contrast to the more metal-rich case of the LMC, where the colour term in the Wesenheit function compensates for the temperature and mass shift, we see, in both the observations and the models, a slight offset between the P-W_BP,RP relations of low- and intermediate-mass stars, respectively. Overall, the model results agree well with our observational findings, supporting the discussion presented in Sect. 4.1.

For the C-rich LPVs (Fig. 12), the calculated tracks for 1O pulsation show the same behaviour as the observations when going from a P-K_s diagram (first row) to a P-W_J,Ks diagram (second row), namely a narrowing of sequence C'. We emphasise that in this diagram the current masses of the model stars are shown, corresponding to main-sequence masses between 1.5 and 2.6 M_⊙. We decided to show the current mass since it is relevant for the pulsation properties of the star. The flattening of the sequences seen in the P-m_bol diagram is reproduced by the models as well. For completeness, results for fundamental mode pulsation of O-rich models are shown in Fig. 13 for the two luminosity indicators W_J,Ks and W_BP,RP. Keeping in mind the limitations of the pulsation models for predicting fundamental mode periods, an overall agreement with the observations is present. Periods become visibly too long at bright magnitudes. The observed narrowing of sequence C is well reproduced by stellar evolution and pulsation models.

Figures 14, 15, and 16 show a variant of the period-amplitude diagram, where the period on the horizontal axis is replaced by the quantity $\log(P) - (W_{J,K_s} - 12)/\Delta_{P,W_{J,K_s}}$ (Wood 2015), where $\Delta_{P,W_{J,K_s}} = -4.444$ is assumed to be a representative value for the average slope of the sequences in the $\log(P)$-$W_{J,K_s}$ PLD. As shown by Trabucchi et al. (2017), this is a better tool than the usual PLD to discriminate between pulsation in different modes, particularly between fundamental mode pulsation (sequence C) and LSPs (sequence D). We point out that this study is limited to the period range 70 to 1000 d. The three distributions visible in Fig. 14 are periods associated with the 1O mode, the fundamental mode, and LSP variability (Trabucchi et al. 2017). In both the LMC and SMC, the majority of stars classified as RGBs or faint AGBs show variability in a LSP. Less than 2 percent of LMC stars in this group show variability in the fundamental mode, and only two show variability in the 1O mode, while in the SMC only two stars of this group show fundamental or 1O variability compared to 91 stars that exhibit a LSP. We note that such small numbers are highly sensitive to how the boundaries are defined in the Gaia-2MASS diagram and to the chosen values of the distance moduli and the K_s-band luminosities of the RGB tip in the two systems.
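A minimal sketch of the modified abscissa just defined; the function and variable names are ours.

```python
import numpy as np

# Modified period-amplitude abscissa of Wood (2015):
# x = log10(P) - (W_JKs - 12) / Delta, with Delta = -4.444 the adopted
# average slope of the sequences in the log(P)-W_JKs diagram.
DELTA_P_WJK = -4.444

def wood_abscissa(period_days, W_jks):
    """Collapse the parallel P-L sequences onto vertical bands so that
    pulsation modes separate more cleanly than in the plain PLD."""
    return np.log10(period_days) - (np.asarray(W_jks) - 12.0) / DELTA_P_WJK
```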
Low- and intermediate-mass O-rich AGBs have periods in all three groups. In contrast, the periods of RSGs and massive AGB stars systematically populate the region associated with 1O mode pulsation (except for a few large-amplitude objects that pulsate in the fundamental mode) and exhibit no LSPs (Fig. 15). We note, however, that the identification of pulsation modes in this diagram by Trabucchi et al. (2017) is based on AGB pulsation models, and may not be valid for RSG variables (see Sect. 4.3). Finally, C-rich stars (Fig. 16) appear in all three groups, but extreme C-rich stars are essentially limited to pulsation in the fundamental mode, which is consistent with the fact that these are evolved objects with high mass-loss rates (Vijh et al. 2009).

Pulsation mode of high-mass LPVs As noted in Sect. 3.3, these objects appear connected to different relations depending on the luminosity indicator used (Fig. 5).

Fig. 9. Top panel: Same as Fig. 1, but massive C-star candidates are highlighted as black crosses. Bottom panel: Selection of massive C-star candidates in a modified Gaia-2MASS diagram using the Gaia G band on the vertical axis.

Taking the result from the P-W_BP,RP diagram at face value would indicate that the bulk of the supergiants and massive AGBs are pulsating in the fundamental mode, while the P-K_s and the P-W_J,Ks diagrams favour a relation to the first overtone pulsation on sequence C'. In all cases a small fraction of group (d) objects in the LMC, of the order of 10%, seems to be shifted away from the bulk of the stars and possibly pulsates in a different mode. In the SMC the number of objects in this group is very small and it is difficult to reach conclusions.

To investigate whether there are two groups of LPVs among the high-mass stars that can be distinguished by their pulsation mode, in Fig. 18 we compare the period with the amplitudes (taken from the magnitude ranges published in Gaia DR2) in the G, G_BP, and G_RP bands for our complete sample. In this diagram, massive AGB stars and RSGs are denoted according to the P-W_BP,RP sequences to which they belong. A difference in the pulsation amplitude may be a hint that different modes are excited. For instance, there is an obvious gap in the amplitude distribution at approximately 2 mag for P > 300 d. The large-amplitude stars form a band starting at a period of about 150 d, and all these objects are located on sequence C and are fundamental mode pulsators. It can be seen that all but three high-mass stars show very similar amplitudes of less than 2 mag. This result is independent of the photometric band used. We conclude that the pulsation amplitudes do not support the presence of two groups with different pulsation modes among the high-mass LPVs. The three objects with a significantly larger variability amplitude are all found close to sequence C in all luminosity indicators tested, and their light curves show bumps in the rising parts, typical of massive Mira-like pulsators (see Fig. 3). We therefore conclude that these are massive AGB stars pulsating in the fundamental mode. In the P-W_BP,RP diagram they are not distinguishable from the bulk of the stars on that sequence.

Chatys et al. (2019) published a large collection of red supergiants from the galactic field and the LMC with their periods and visual amplitudes. As in our study, these authors found light amplitudes to be lower than 1 mag for the vast majority of objects.
Only one star in their list, UZ CMa, which has been classified as a RV Tau variable, shows an amplitude of 2 mag. Even though the amplitudes reported in Chatys et al. (2019) are in photometric bands different from those of Gaia, we find our results to be in good agreement with that study. While amplitudes alone do not allow us to draw a conclusion on the dominant pulsation mode, models suggest that higher mass tends to suppress overtone instability (e.g. Trabucchi et al. 2019, their figures 22 and 23). A study of the velocity amplitude of typical representatives of this object class would add an important observational constraint to identify their pulsation modes. If we assume the fundamental mode to be the dominant mode of pulsation in the red supergiants, models give typical masses between 15 and 30 M_⊙.

In addition to the paper by Chatys et al. (2019) mentioned above, Ren et al. (2019) recently presented PLDs for supergiants in M31 and M33. A comparison with pulsation models allowed the latter authors to attribute the supergiants to fundamental or first overtone pulsation. In Fig. 19 we plot the period and K_s values for the supergiants derived by them together with the Gaia DR2 results for the LMC. It can be seen that the Gaia group (d) stars fall on the relations defined by the M31 and M33 supergiants.

Fig. 12. Same as Fig. 11, but comparing observed stars on branch (b) with the portions of evolutionary tracks having C/O > 1. The represented tracks roughly correspond to the minimum and maximum initial masses that produce carbon stars pulsating in the 1O mode at the chosen metallicities. For clarity, tracks with an initial mass intermediate between these two values have been omitted.

A similar agreement was found with the PLDs presented by Chatys et al. (2019) for galactic and LMC red supergiants. From this plot, it also becomes clear that the LSPs of red supergiants occur on timescales of a few thousand days and are therefore out of reach for Gaia DR2. Using the conclusions of Ren et al. (2019) on the pulsation mode of these objects, obtained by comparing their location in the P-K_s plane with pulsation models, we find the bulk of our group (d) stars to be fundamental mode pulsators. If we want to use the PLD to identify the pulsation mode of a star, and if we attribute fundamental mode pulsation to sequence C, we must use the P-W_BP,RP diagram because it is the only one with the massive stars falling on that sequence. For other indicators of luminosity, such as K_s or W_J,Ks, the shift likely resulting from the mass and temperature difference relative to low-mass stars is of a size that puts these stars onto sequence C'. A parallel sequence shifted towards shorter periods, which is seen in our study (Fig. 19), in the paper by Ren et al. (2019) and in the work by Chatys et al. (2019), is likely formed by first overtone pulsators among the massive AGB stars and the supergiants.

Conclusions Gaia observations of the LMC and SMC have allowed the study of the variability of red supergiants and massive AGB stars, in contrast to the extensive previous ground-based observations of OGLE and MACHO, in which these stars saturated the detectors used. The Gaia-2MASS diagram, constructed for the LMC, provides significant advantages in comparing the AGB and red supergiant populations of different stellar systems. First of all, the quantity on the x-axis is independent of both distance and (global) reddening, and thus insensitive to the uncertainties in both quantities.
A comparison of the two plots in Fig. 1 suggests that the Gaia-2MASS diagram is applicable to other galaxies as well. Our diagram allows us to distinguish between stars of different mass and chemistry. This paper focussed on the application of this distinction to the P-L relations of LPVs. The appearance of the P-L relations depends on the spectral range of observations, so that different kinds of information can be derived from the PLD depending on the photometric passbands employed in its construction (e.g. Soszynski et al. 2007, their figure 1). In this paper, we considered four such PLDs employing the K_s band and W_J,Ks index from 2MASS, the W_BP,RP index from Gaia DR2 photometry, and the bolometric magnitude m_bol, respectively.

We could, for the first time, clearly show the existence of an offset between low- and intermediate-mass oxygen-rich stars for both fundamental and 1O pulsators, and thus confirm predictions from pulsation theory. The offset is, however, not visible when using the Wesenheit index W_BP,RP. We explain this as the result of a temperature sensitivity inherent in the chosen combination of the Gaia BP and RP filters. The temperature effect is different for intermediate-mass and low-mass stars, and compensates for the offset due to the mass effect. As a consequence, sequence C' in the P-W_BP,RP diagram is found to be the narrowest compared to the other luminosity indicators investigated in this work. Therefore, this relation may become the preferred approach to determine distances from the variability behaviour of AGB stars. In addition, we possibly found a way to identify the most massive C-rich stars in our sample, using an analogy to the O-rich case. However, this result requires validation by further tests and observational proof.

A detailed comparison with combined stellar evolution and pulsation models is presented, both for the O-rich and C-rich cases. We showed that synthetic photometry from these models allows us to reproduce the observed behaviour of first overtone pulsators in all four of our luminosity indicators very well. As pointed out in other papers already, the linear pulsation models used in this work are not capable of reproducing the observations for fundamental mode pulsations.

Our findings for massive AGB stars and red supergiants agree with former studies presented in the literature. Based on this, we agree with earlier claims that the majority of stars in this group pulsate in the fundamental mode. However, the amplitude of these fundamental mode pulsating red supergiants never reaches the large values exhibited by intermediate- and low-mass stars during their final pulsating Mira stages. In contrast to the P-K_s diagram, and similar to the situation for intermediate-mass stars, the P-W_BP,RP diagram based on Gaia data shows this group of stars as a smooth extension of sequence C. A few members of our group (d) seem to pulsate in the first overtone mode according to their locations in the various PLDs. For the supergiants, as for the other mass ranges, results for the two Magellanic Clouds are qualitatively similar.

Fig. 18. Period-amplitude diagrams of O-rich stars, colour coded by the evolutionary group to which they belong. The green dots indicate faint AGBs and RGBs, the blue dots indicate low-mass AGBs, and the cyan squares indicate intermediate-mass AGBs. The massive AGBs and RSGs from branch (d) are shown as orange/red diamond symbols according to whether they belong to sequence C'/C in the P-W_BP,RP diagram.
The three panels present the amplitudes in G, G_BP, and G_RP, respectively, taken from the magnitude ranges published in Gaia DR2.

Appendix A: Bolometric corrections Bolometric correction relations were computed by comparing relative bolometric magnitudes from a fit to multiband photometry from the optical to the far-infrared range (see Kerschbaum et al. 2010, for details) with Gaia G photometry, as BC(G) = m_bol − G. The resulting BC values are thus negative and need to be added to G. The Gaia photometry is thereby a median of several measurements, while the ground-based multiband photometry is typically a single-epoch measurement at a random phase. This difference plays a negligible role for the semi-regular variables and supergiants, but may lead to a widening of the relations by the large-amplitude Miras. We further note that neither the ground-based nor the Gaia photometry was corrected for interstellar reddening. However, this simplification seems acceptable since we included only nearby LPVs in our study and since the fit of the bolometric flux was dominated by the near- and mid-infrared bands, which are much less affected by extinction.

For 678 M-type giants we derived a BC(G) versus G_BP − G_RP relation,

BC(G) = 0.790 − 0.953 (G_BP − G_RP). (A.1)

The reference objects used and the fitted relation can be seen in Fig. A.1. We decided on a third-degree polynomial fit as it allowed us to reproduce the observed bending for very red objects. The sample used for the computation of the relation limits its applicability to the colour range 1.5 < G_BP − G_RP < 7.5. This range covers the colour range of the LPV candidates in the Gaia DR2 catalogue very well (see figure 13 of Mowlavi et al. 2018). For the very few targets outside of this G_BP − G_RP range it seems feasible to extrapolate the derived relations.

The uncertainty of the BC increases with colour owing to the effects of variability and circumstellar reddening, which both increase for redder objects. To quantify this uncertainty, we divided the G_BP − G_RP colour range into bins of 0.1 mag width and computed the standard deviation σ of the BC value within each of these bins. Then we made a linear fit of σ as a function of G_BP − G_RP, from which the uncertainty of the BC for an M-type giant of a given colour can be estimated. An interpolation of the uncertainty to colours outside the range of validity mentioned above is problematic owing to the linear relation used in this work. To avoid getting negative error values for the bluest LPV candidates, we suggest setting the uncertainty to an arbitrary value of 0.01 mag for stars with G_BP − G_RP < 1.5.

In the same way we derived a corresponding relation and uncertainty estimates for 139 C-type giants. The data for the C-star sample and the fitted BC relation are presented in Fig. A.2. Obviously, the relation is very different from that for the M giants. There is a very steep decline towards the red end. Even though we have to accept a somewhat higher uncertainty, the C-type Miras in our sample allow us to constrain the relation quite well. Differences from M-type stars occur also on the blue side of the relation, which makes a clear distinction of stars according to their chemistry indispensable for applying a proper BC.

Finally, we constructed a BC relation for the red supergiants (Fig. A.3). While the number of reference objects available is comparably small, they seem to define a second-order polynomial relation reasonably well.
Because of the comparably small number of reference objects we decided not to derive an error function as for the previous two samples of O- and C-rich stars. Instead, we estimate a typical uncertainty of the BC for supergiants of 0.3 mag. Tests of G − G_RP versus BC(G) and G_BP − G versus BC(G) relations were also done, but G_BP − G_RP is preferred because the Gaia BP and RP wavelength ranges have a very limited overlap, which is not the case between the G and BP, or the G and RP, filters. However, these alternative relations could be constructed for cases where measurements in one filter are missing.
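To illustrate how such a correction would be applied in practice, here is a minimal sketch using the terms of relation (A.1) as printed above together with the stated validity range and the suggested 0.01 mag uncertainty floor. Note that the paper describes the full fit as a third-degree polynomial, so the two coefficients used here are only those that survive in the text, and the function name is ours.

```python
import numpy as np

def mbol_from_gaia_mgiant(G, G_BP, G_RP):
    """Bolometric magnitude for an M-type giant from Gaia photometry.

    Uses the leading terms of relation (A.1):
        BC(G) = 0.790 - 0.953 (G_BP - G_RP),
    quoted as valid for 1.5 < G_BP - G_RP < 7.5; BC is negative over
    this range and is added to G, i.e. m_bol = G + BC(G).
    """
    colour = np.asarray(G_BP) - np.asarray(G_RP)
    if np.any((colour <= 1.5) | (colour >= 7.5)):
        # Outside the calibrated range any extrapolation is tentative.
        print("warning: colour outside the 1.5-7.5 validity range")
    bc = 0.790 - 0.953 * colour
    return np.asarray(G) + bc

# Per the appendix, an uncertainty floor avoids negative error values:
def bc_uncertainty_floor(colour, sigma_fit):
    return np.where(np.asarray(colour) < 1.5, 0.01, sigma_fit)
```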
Examining the Quality of Electronic Services and Its Relationship with User Satisfaction in Social Security Organization (Branch 17) Given the importance of service quality in service organizations, this article attempts to analyze the impact of the perceived quality of electronic services on user satisfaction with the provided services. To this end, a questionnaire was designed and administered to measure the satisfaction level with the electronic services provided by the Social Security Organization. The questionnaire was based on the diverse dimensions of electronic service quality organized in a conceptual model (including usability, information quality and service interaction). This research was a survey study and used a random sampling method. Having collected data from a sample of 250 users of the electronic services of the Social Security Organization (Branch 17) and performed correlation and linear regression tests, it was found that the perceived quality of electronic services has a statistically significant and positive impact on user satisfaction with these services.

1. Introduction Bureaucratic mazes are among the major problems of developing countries and, thus, the quality of service provided to users does not receive much attention. Since our country is famous for being bureaucratic in nature, the subject of honoring customers has been repeatedly discussed in recent years and has received a lot of publicity. In such a situation, many organizations claim that they have implemented electronic government throughout their organizations. Accordingly, the question is whether the mere claim of providing electronic services to customers is enough. The logical approach is to review and assess these claims, because a claim, or even the provision of such services, is not enough; the quality of these services is important as well. Thus, how these services are offered, at what level, and to what extent the objective of user satisfaction is met must receive due attention. In this regard, it is important to discover whether the electronic services provided by organizations have the desired quality and can fulfill user satisfaction. Governments may take advantage of information and communication technology and electronic government to improve and revitalize service quality. The main channel for providing electronic services is the organization's website, through which all the services are provided. This is where the measurement of service quality goes beyond the organization's physical environment and building and enters the virtual world of websites. This major change in the way services are provided leads to a change in the manner of measurement, and some new measurement indicators are required to assess and evaluate these types of services. As such, this article, which is derived from a systematic research project, attempts to assess and evaluate the status of the electronic services provided by the Social Security Organization and to determine the extent to which users are satisfied with the service quality.
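The analysis described above (a Pearson correlation plus a linear regression of satisfaction on perceived e-service quality) can be sketched as follows; the CSV file and column names are hypothetical stand-ins for the questionnaire data, not the study's actual dataset.

```python
import pandas as pd
from scipy import stats

# Hypothetical questionnaire data: one row per respondent (n = 250),
# with mean scores for the three quality dimensions and satisfaction.
df = pd.read_csv("survey_responses.csv")  # illustrative file name
df["quality"] = df[["usability", "information_quality",
                    "service_interaction"]].mean(axis=1)

# Pearson correlation between perceived quality and satisfaction.
r, p_value = stats.pearsonr(df["quality"], df["satisfaction"])
print(f"Pearson r = {r:.3f}, p = {p_value:.4f}")

# Simple linear regression: satisfaction ~ perceived quality.
slope, intercept, r_val, p_reg, stderr = stats.linregress(
    df["quality"], df["satisfaction"])
print(f"satisfaction = {intercept:.2f} + {slope:.2f} * quality")
```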
Service quality Service quality involves a comparison between expectations and performance. In other words, service quality is "the difference between customers' expectations and perceptions of received services" (Zeithaml, Valarie, Parasuraman and Malhotra, 2002). Alternatively, service quality is the extent to which a given service is achieved. In this regard, objective and subjective service quality can be characterized as follows: • Objective service quality measures the tangible compliance of work results with previously defined benefits. Since there is a significant interdependence between measurement and the precision of definition, the criterion of quality measurement tends to become subjective. • Subjective service quality is the compliance of the work result with the expected benefits. This perception results from the customer's initial perception of the services and the tendency of the service provider to deliver superior performance. • Although researchers have repeatedly investigated the concept of service quality over the last decades, there has been no convergence or agreement on the concept. This may be because, until now, many researchers have focused on different aspects of service quality. The common ground of such research is that, since services are intangible and heterogeneous and their delivery is often inseparable from their consumption, the process of evaluating service quality is exceptionally complex in nature. As such, this evaluation cannot be carried out easily (Parasuraman, Zeithaml and Malhotra, 2005). Electronic service quality and its measurement Governments may take advantage of information and communication technology and electronic government to improve and revitalize service quality. The main channel for providing electronic services is the organization's website, through which all services are delivered. This is where the measurement of service quality goes beyond the organization's physical environment and building and enters the virtual world of websites. This major change in the way services are provided leads to a change in the manner of measurement, and new indicators are required to assess and evaluate these types of services. The concept of quality is defined differently in the business literature and can be examined from different perspectives. From the producer's viewpoint, quality is the ability of a given product to accomplish the functions for which it was designed. From the customer's side, quality refers to those features and characteristics of the product or service that affect its ability to create satisfaction among users. Formerly, quality was considered synonymous with satisfaction and the two were used interchangeably. However, it is now believed that the two concepts differ in both meaning and measurement criteria. Satisfaction is a broader concept than quality, and quality can be regarded as one of the factors that lead to customer satisfaction.
Customer satisfaction is the customer's assessment of whether the product or service has been able to meet his/her needs and expectations (Iran Nejad Parizi, 2005). In fact, customer satisfaction is the customer's response and represents a judgment about the features of a product or service, or about the product or service itself, indicating an optimal level of satisfaction associated with consumption (Shankar, Smith and Rangaswamy, 2003). The result of such satisfaction may take the form of customer loyalty, repurchase and recommending the company's products and services to others (Zeithaml, Valarie, Parasuraman and Malhotra, 2002). As mentioned, quality (of services or products) is one of the factors that affect user satisfaction; as the customer's perception of the quality of the product or service increases, the level of customer satisfaction and loyalty, and thus the likelihood of purchase, increases as well. Previously, the majority of scientific studies in the field of quality focused on the quality of physical products. In recent decades, however, the concept of service quality has come to include such aspects as intangibility, inseparability, non-maintainability and heterogeneity. It should be noted that service quality has been the dominant and determining element in customer satisfaction. E-service quality measurement models One of the most important features of models for classifying electronic service quality is that they focus mainly on features of service quality, the level of information provided, the manner of delivery and some features of the system. Another feature of these classifications is that their results are derived from the merger, adaptation and development of existing models. As mentioned, the main channel for providing electronic services is the organization's website, through which all services are delivered. There are three types of situations in the field of electronic services that require evaluation. The first is the electronic environment. The second belongs to the evaluation of the performance of an electronic program or project. The third deals with the overall effect of electronic services on the overall performance of an institution, economic development and public services (Bhattacharya, Ecker, Olsson and Schipper, 2012a). In the academic research, the main models for measuring the quality of electronic services are as follows: Web-Qual Model Loiacono, Watson and Hoodhue (2002) made use of twelve original constructs to measure the process of development and validation of the quality of a website. They drew on "The Theory of Rational Action" and applied it to information technology through the "Technology Acceptance Model"; the "Technology Acceptance Model" is thus adapted from "The Theory of Rational Action". The Theory of Rational Action uses different variables to predict the behavior of individuals in certain circumstances. According to this theory, the behavior of each individual stems from his/her intention, and these intentions are in turn subordinate to his/her attitude and internal norms. Having conducted this research, Loiacono, Watson and Hoodhue (2002) concluded that four aspects, usefulness, ease of use, attractiveness and friendly relations, might be used to assess the quality of information services websites. Furthermore, the mentioned aspects included 12 indexes.
Website Quality Model The Website Quality Model (Zhang and Prybutok, 2005) proposes a set of qualitative factors for web design quality, divided into specific categories and features. Each of these features guides the formation of customers' expectations towards the design of a website. The categories include information content, cognitive consequences, entertainment, privacy, user support, exhibitive appearance, technical support, guidance, organization of the information content, credibility and impartiality. A questionnaire is used through which each user evaluates each assessed category (as a basic, performance or excitement factor). Although this model pays special attention to the design and usability of a website, other factors such as user interaction with the website are also given due consideration. This model is used in various fields (electronic education, sales, etc.). E-Sequal Model E-Sequal is an assessment tool that merges strategies for managing customer relationships and human-computer interaction to design and evaluate electronic commerce situations. It aims to meet customers' expectations and provide optimal service quality, as well as guidelines to support the user in interaction with the website during the process of electronic purchase and to establish connection at all points of possible contact between the user and electronic commerce services. The model consists of a set of requirements or solutions (initiatives and sub-initiatives) that remove or avoid specific obstacles that reduce customers' perceptions of observed value. An obstacle is an aspect of the electronic commerce environment which makes it unpleasant, difficult, impossible or inadequate for users to achieve a positive overall experience (including such problems as accessibility, hidden costs, unclear return information or information that is not readily available). The initiatives and sub-initiatives are classified into the following three categories: initiatives prior to purchase (related to the manner by which one decides to buy from a particular website), purchase initiatives (related to the product or service selected by the customer in order to buy and check out online) and initiatives after the purchase (after choosing a set of initiatives in dealing with services). E-Qual Model Having used a website, users' expectations of the level of service provided by public sites increase significantly. This model is founded on users' perceptions of quality weighted by their importance. In this model there are five determining factors, usability, design, information, trust and empathy, which have been integrated into three factors: usability, information quality and service interaction. The model has been proposed by Barnes and Vidgen to assess the quality of websites and has been tested in many fields, including online bookstores, auction sites and electronic government. The E-Qual Model makes use of a 23-item survey instrument to solicit users' subjective perceptions. The analysis of survey data reveals that E-Qual represents the following three basic components: usability, information quality and service interaction. Each of these components provides specific conclusions for the website provider. • Usability consists of such items as "ease of exploration" and "ease of learning to operate". This element points to the need for usability testing of the website (Rosson and Carroll, 2002).
• Information quality (i.e., "credible information", "correct information" and "timely information") requires organizations to adopt defined content management practices (Delon and Mack Lyon, 1993). • The quality of service interaction deals with the manner in which an organization represents itself and accomplishes its work in a virtual world. An important factor in service interaction is the concept of trust, which is captured by items such as "the importance of security of personal information". Therefore, the E-Qual Model is considered a comprehensive and validated framework for assessing user perceptions of the quality of a website (Anderson and Srinivasan, 2003). • The E-Qual is based on quality function deployment (QFD), which is a structured and systematic process. It is a tool to identify and deliver the voice of the customer through each stage of development and application of a product or service. QFD may be applied by adopting the "user's voice" and identifying the quality requirements in terms that are meaningful to users. The quality as perceived by the customer is then reflected back, forming the basis of a quality assessment for the product or service. • The E-Qual differs from those studies that emphasize features or characteristics of the website. In studies conducted as part of subsequent QFD processes in the context of E-Qual, website users are asked to rank the desired sites against a set of qualitative norms, and each qualitative norm is ranked in terms of importance. Although the qualitative norms are designed subjectively in the context of E-Qual, there are not sufficient data analyses using quantitative techniques (i.e., running reliability tests for E-Qual tools). Customer Satisfaction Models 3. Kano Model The Japanese professor Noriaki Kano is a renowned theorist in the field of world-class quality. He believes that the concept of quality is an integral part of any business and a key factor in global competition. Given the increasing scope of global competition, it seems impossible to meet the needs of customers only through existing products. Accordingly, producing and presenting innovative and modern products is required to meet customers' expectations, and this depends on a detailed understanding of their changing needs and demands. Thus, the concept of quality is defined as follows: quality means responding to the needs, desires and expectations of the user (customer) and even going beyond his/her satisfaction. Electronic Customer Satisfaction Index (E-CSI) This model is based on three main antecedents of customer satisfaction (trust, electronic services and perceived value) and two consequences of customer satisfaction (customer complaints and loyalty). The model is derived from the ACSI model, in which customer expectations and service quality are replaced by customer loyalty and the quality of electronic services. Research background 4. Molavi (2009) conducted a study titled "a survey of relationship between electronic service quality and electronic satisfaction in banking system; the case of central branch of Agricultural Bank" and adopted the Zeithaml model, in which the quality of electronic services is treated as a seven-dimensional construct (efficiency, fulfillment, reliability, personal privacy, responsiveness, compensation and contact). She then measured the impact of these dimensions on customer satisfaction and concluded that there was a significant relationship between these seven dimensions and customer satisfaction.
Golshani Mehr (2011) conducted a study titled "assessing the impact of electronic service quality and customer satisfaction on financial performance of Bank Saderat Iran". Having collected a sample of 187 customers of Bank Saderat Iran from some distinguished, first-class branches in Tehran and Alborz provinces, the subjects were asked to respond to questions on electronic service quality and customer satisfaction. For the first hypothesis, the analysis of the data showed that all dimensions of electronic service quality (except Internet banking) had significant impacts on customer satisfaction. Regarding the second hypothesis, only ATM service quality and perceived value had significant positive effects on the financial performance of Bank Saderat Iran. Results for the third hypothesis suggested a significant positive relationship between customer satisfaction and the financial performance of the bank. The fourth hypothesis indicated that ATM service quality and the perceived value of services influenced the financial performance of Bank Saderat Iran both directly and indirectly (through customer satisfaction), while the quality of telephone banking services and the quality of basic services had only an indirect impact (through customer satisfaction) on financial performance. Hejazi (2005) conducted a study titled "measurement of satisfaction in Ramak Company by fuzzy approach" and attempted to review, identify, measure and prioritize the factors affecting customer satisfaction with the Ramak Company. The results showed that the variables of ease of access, complaint resolution, quality and price had a determining impact on customer satisfaction. Molavi (2009) conducted a study titled "a survey of relationship between electronic services quality and electronic satisfaction in banking system; the case of central branch of Agricultural Bank of Tabriz" and concluded that there was a significant relationship between electronic service quality and its dimensions (efficiency, fulfillment, reliability, personal privacy, responsiveness, compensation and contact) and electronic customer satisfaction in the central branch of the Agricultural Bank of Tabriz. Moreover, the quality of electronic services had the strongest connection with users' electronic satisfaction. Zhang and Prybutok (2005) carried out a study titled "a consumer perspective of electronic service quality" and used such variables as individual differences, facilities of electronic services, the quality of website services, risk-taking, electronic satisfaction and intention in developing their model for electronic services. Among these factors, service facilities, service quality, the website and risk-taking were the most determining factors affecting the level of consumer satisfaction. Furthermore, ease of electronic services, perceived risk and website quality were ranked as factors affecting customer satisfaction. In addition, a relationship between customer satisfaction and the decision to buy was found. Only the hypothesis on the relationship between individual ability in using computers and the ease of use of electronic services was rejected. Finally, they concluded that customer experience was associated with behavioral intentions: the more positive the customer experience, the more inclined he/she would be to reuse the service. Conceptual frameworks and model 5.
There should be a theoretical framework, called the conceptual model, to conduct scientific and systematic research. The theoretical framework is a conceptual pattern based on a number of theoretical relations among factors that are important to the research. This framework is rationally constructed by examining the research record in the territory of the research subject. Accordingly, the conceptual model in this study is as follows (Diagram 1). Diagram 1. The research conceptual model The research hypotheses 6. The first hypothesis: There is a relationship between the usability of the electronic services of the Social Security Organization and user satisfaction. The second hypothesis: There is a relationship between the information quality of the electronic services of the Social Security Organization and user satisfaction. The third hypothesis: There is a relationship between the service interaction of the Social Security Organization and user satisfaction. Methodology 7. In terms of the practical criterion, this study was applied research, because it was conducted in a real organization and its results are used in real life. In terms of time, the study was cross-sectional. Regarding the data collection method, this study combined document and field studies, because books, articles, etc. were used to build the theoretical framework and research background, and questionnaires were used to collect the required data. Given that this research involved collecting information directly from a group of individuals, and since the results from the sample would be generalized to the entire population, it can be regarded as a survey study (Mirzayi, 2009). A researcher-made questionnaire was used as the data collection tool. The questionnaire consisted of closed questions in two parts: the first part included questions about the gender, age and level of education of the respondents, and the second part consisted of 28 items about the quality of the electronic services of the Social Security Organization and user satisfaction. For the latter part, respondents were asked to respond to each item by choosing one of the options "strongly agree, agree, no idea, disagree and strongly disagree"; thus a Likert scale was used to measure the constructs. For scoring, the numbers 5 to 1 were assigned from the first option (strongly agree) to the last option (strongly disagree), respectively. Each variable was assessed using a number of questionnaire items, and the total score of the items for each variable was taken to represent the score of that variable. Validity Content validity and the CVR coefficient were used to ensure the quality of the research instrument. Since the CVR coefficient of every item was greater than the minimum acceptable level of 0.6, all items had an acceptable level of validity. The sum of the CVR coefficients over all items was 20.8; dividing this total by the number of questions gives an overall CVR of 0.74, which indicates that the items of the questionnaire had an acceptable level of content validity.
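The validity check described above can be reproduced with a short script. The per-item formula below is the standard Lawshe content validity ratio, which the paper does not spell out, so treat it as an assumption about how the reported values (a sum of 20.8 over 28 items, an average of about 0.74) were obtained.

def content_validity_ratio(n_essential, n_experts):
    """Lawshe CVR for one item: (n_e - N/2) / (N/2)."""
    return (n_essential - n_experts / 2) / (n_experts / 2)

def questionnaire_cvr(item_cvrs, threshold=0.6):
    """Average CVR over all items and the count of items passing the threshold."""
    average = sum(item_cvrs) / len(item_cvrs)
    n_valid = sum(1 for c in item_cvrs if c >= threshold)
    return average, n_valid

# With 28 item CVRs summing to 20.8, questionnaire_cvr returns an average of ~0.74.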
Reliability Internal consistency reliability (Cronbach's alpha) was computed separately for each variable to ensure the stability of the measurement instrument. The Cronbach's alpha values for the quality of electronic services as perceived by users and for user satisfaction were 0.874 and 0.7, respectively. The Cronbach's alpha for the total scale was 0.864, which is an acceptable value; the closer the value of alpha is to 1, the higher the internal consistency. The statistical population of this study comprised all users of the electronic services of the Social Security Organization (Branch 17) in May 2014 (N = 700). The sample size was determined for a level of accuracy (e) of 0.05 at the 95% confidence level. These levels were set in accordance with Table 9-2 of the book "research, scholarship and research writing" (Mirzayi, 2009), which was itself adopted from Glenn D. Israel (2008). On this basis, the sample size was determined as 255 individuals. Sampling method Individuals using the electronic services of the Social Security Organization (Branch 17) on Saturdays, Mondays and Tuesdays in May 2014 were randomly selected to respond to the research questionnaires. Data analysis 8. Processing and analyzing the data is essential for verifying hypotheses in any type of study. Nowadays, most research that relies on data collected from subjects treats data analysis as one of the most important parts of the work. Raw data are processed via statistical techniques, and the processed data are then placed at the disposal of users. In this study, descriptive statistics were used to characterize the sample, and correlation and linear regression analyses were used to analyze the relationships between the variables. Descriptive statistics In the first part of the statistical analysis, the statistical distribution of the sample is characterized in terms of gender, age, education, job and the level of use of electronic services (Table 2). The analysis of correlation between research variables Before the model was examined in terms of structural equations, Pearson correlation coefficients were used to analyze the correlations between the research variables. The results (Table 3) showed a significant positive correlation between all dimensions of the quality of electronic services (independent variables) and user satisfaction (the dependent variable). The highest correlation was observed between service interaction and user satisfaction (0.661). There was also a positive correlation between usability and user satisfaction (0.646) and a significant positive correlation between information quality and user satisfaction (0.637).
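A minimal sketch of the reliability and sample-size computations mentioned above. Cronbach's alpha follows its standard definition; the sample-size function uses the simplified formula n = N / (1 + N * e^2) that underlies published tables such as the one cited (N = 700 and e = 0.05 give 255). Both are assumptions about the exact procedure, since the paper only reports the resulting values.

import numpy as np

def cronbach_alpha(item_scores):
    """item_scores: 2-D array, rows = respondents, columns = questionnaire items."""
    x = np.asarray(item_scores, dtype=float)
    k = x.shape[1]
    item_variances = x.var(axis=0, ddof=1).sum()
    total_variance = x.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

def sample_size(population, e=0.05):
    """Simplified sample-size formula n = N / (1 + N * e^2)."""
    return round(population / (1 + population * e ** 2))

print(sample_size(700, e=0.05))  # -> 255, matching the reported sample size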
The results of regression analysis As shown by the stepwise regression analysis (Table 4), the three variables of usability, information quality and service interaction are jointly correlated with user satisfaction. Regarding the order of importance of the predictor variables, the first step yielded the correlation between service interaction and user satisfaction (0.661). The second step yielded the multiple correlation of service interaction and usability with user satisfaction (0.721); the increment due to usability was 0.06. The third step yielded the multiple correlation with information quality added (0.73); the increment due to information quality was 0.007. In sum, these three variables accounted for 53.2% of the variance in user satisfaction, of which 43.6% was related to service interaction, 8.4% to usability and 1.2% to information quality. As can be seen in Table 5, the analysis of variance confirmed the reliability of the stepwise regression in predicting user satisfaction (F = 95.297, P < 0.001). As can be seen in Table 6, the three variables of service interaction, usability and information quality entered the regression equation to predict user satisfaction. Service interaction (standardized beta = 0.343), usability (standardized beta = 0.295) and information quality (standardized beta = 0.181) each had significant predictive power for user satisfaction (P < 0.001). These standardized beta coefficients mean that a one-unit change in service interaction, usability and information quality corresponds to a 0.343, 0.295 and 0.181 unit change, respectively, in user satisfaction. According to Table 6, the final regression equation for user satisfaction was: User satisfaction = 6.212 + 0.143 (information quality) + 0.198 (usability) + 0.29 (service interaction). Regarding the first hypothesis of the study, there is a direct relationship between usability and user satisfaction among users of the electronic services of the Social Security Organization at the 0.95 confidence level. In other words, usability is a determining factor of electronic service quality affecting user satisfaction with the electronic services of the Social Security Organization. Usability reflects the convenience, ease of access and efficiency of a website, and in general greater usability of a website increases user satisfaction (Jakob Nielsen, 2012). Regarding the second hypothesis of the study, there is a direct relationship between information quality and user satisfaction among users of the electronic services of the Social Security Organization at the 0.95 confidence level. In other words, information quality is a determining factor of electronic service quality affecting user satisfaction with the electronic services of the Social Security Organization. Information quality, which includes the confidentiality of individuals' information, the understandability and usefulness of the information contained on the website, connectivity to webmasters, etc., increases user satisfaction (Mina and Anio, 2005).
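Reading 6.212 as the constant term of the unstandardized equation reported above, the prediction can be written as a one-line function; the inputs are assumed to be the summed Likert scores for each dimension, as used in the questionnaire.

def predicted_user_satisfaction(information_quality, usability, service_interaction):
    """Unstandardized regression equation reported in Table 6 (intercept assumed to be 6.212)."""
    return (6.212
            + 0.143 * information_quality
            + 0.198 * usability
            + 0.290 * service_interaction)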
Regarding the third hypothesis of the study, there is a direct relationship between service interaction and user satisfaction among users of the electronic services of the Social Security Organization at the 0.95 confidence level. In other words, service interaction is a determining factor of electronic service quality affecting user satisfaction with the electronic services of the Social Security Organization. If customers receive services in accordance with their expectations, they will be satisfied; when a customer finds that the organization provides high-quality, acceptable and timely services, he/she will decide to return. Therefore, the interaction of electronic services increases user satisfaction (Harrison, 1994). • The results of this study indicate that the usability of electronic services and user satisfaction are positively correlated. It is therefore suggested that organizations adopt the strategies and policies needed to increase the usability of electronic services, including making it easier to learn how to work with the website, establishing clear contact through the website, increasing the ease of browsing, facilitating use of the website and increasing the attractiveness of the website, in order to increase user satisfaction. • The results of this study indicate that the information quality of electronic services and user satisfaction are positively correlated. It is therefore suggested that organizations increase the quality of the information provided, including the credibility and timeliness of website information, the connection between website information and its function, and the ease of understanding the information on the website, in order to attract user satisfaction. • The results of this study indicate that the service interaction of electronic services and user satisfaction are positively correlated. It is therefore suggested that the organization increase its efforts to enhance the public profile of the website among citizens, the level of transaction security via the website, the security of personal information on the website and the ability to personalize the website, so that user satisfaction can be increased. Table 3. Correlation between variables. Table 4. Correlation coefficients, squared multiple correlation coefficient, adjusted correlation coefficient and standard error of the estimate for user satisfaction. Table 5. The analysis of regression variance in predicting user satisfaction. Table 6. Standard and non-standard regression coefficients for user satisfaction.
6,574.4
2016-06-06T00:00:00.000
[ "Computer Science", "Business" ]
Facilitation and Dominance in a Schooling Predator: Foraging Behavior of Florida Pompano, Trachinotus carolinus Presumably an individual’s risk of predation is reduced by group membership and this ‘safety in numbers’ concept has been readily applied to investigations of schooling prey; however, foraging in groups may also be beneficial. We tested the hypothesis that, when feeding in groups, foraging of a coastal fish (Florida Pompano, Trachinotus carolinus) on a benthic prey source would be facilitated (i.e. fish feeding in groups will consume more prey items). Although this question has been addressed for other fish species, it has not been previously addressed for Florida Pompano, a fish known to exhibit schooling behavior and that is used for aquaculture, where understanding the feeding ecology is important for healthy and efficient grow-out. In this experiment, juvenile Florida Pompano were offered a fixed number of coquina clams (Donax spp.) for one hour either in a group or as individuals. The following day they were tested in the opposite configuration. Fish in groups achieved greater consumption (average of 26 clams consumed by the entire group) than the individuals comprising the group (average of 14 clams consumed [sum of clams consumed by all individuals of the group]). Fish in groups also had fewer unsuccessful foraging attempts (2.75 compared to 4.75 hr-1) and tended to have a shorter latency until the first feeding activity. Our results suggest fish in groups were more comfortable feeding and more successful in their feeding attempts. Interestingly, the consumption benefit of group foraging was not shared by all – not all fish within a group consumed equal numbers of clams. Taken together, the results support our hypothesis that foraging in a group provides facilitation, but the short-term benefits are not equally shared by all individuals. Introduction Many fish species form groups at some time during their life history and group behavior serves a variety of functions in different systems. Fish groups are termed 'shoals' when fish are loosely organized and 'schools' when coordinated swimming occurs [1]. In teleosts, schooling behavior is dictated by two main keys: predators and food [2][3][4]. A balance between the two seems to be maintained by schooling fish, but it can shift depending on prey distribution, suggesting that predator defense mechanisms do not necessarily take precedence over feeding (e.g. [5]). But, predator defense is the well-hypothesized function of schools, stemming from concepts related to safety in numbers [6,7], the dilution effect (e.g. [8]), and heightened predator surveillance [7,9,10]. Nevertheless, several advantages may be conferred by group foraging. Fish in groups have been shown to increase search efficiency (reduced search times) and allocate more time to feeding (e.g. [9]), exhibit sampling behaviors (i.e. sampling food patches of different quality [11]), alter feeding strategies to maximize energy efficiency (e.g. [12]), hunt collaboratively for mobile prey [13], and engage in passive information transfer and forage area copying behaviors (reviewed by [9]). Previous research has examined feeding responses in relation to schooling behaviors in a variety of fish species, with many results suggesting increased foraging success in groups (e.g. three-spined sticklebacks [10], Australian salmon [14], walleye Pollock [15]). 
However, similar studies have not been applied to Florida Pompano when feeding on a natural benthic prey source; most previous Florida Pompano group-feeding experiments have been in regards to evaluating feeding efficiency for commercial aquaculture. If indeed group membership promotes foraging success, then schooling by fish predators, like Florida Pompano, may be one of several mechanisms used to cope with difficult foraging situations. Sandy beach environments, where Florida Pompano are regularly observed, could be considered one such complex foraging habitat. At the interface of sea, land, and air, sandy beach slopes are a stressful environment-few systems compare in terms of physical stability or biological structure [16]. Consequently, prey organisms inhabiting beach slopes have a generally high mobility and the ability to burrow rapidly [16]. Intuitively, fish that feed on these organisms must then in turn have developed mechanisms to successfully forage on mobile, burrowing prey. Many foraging strategies of schooling fish focus on the location of prey [14], and this is especially important when schooling predators are foraging on mobile aggregations that may only be briefly available [15]. However, this concept may also apply to other predator-prey relationships. The Florida Pompano (Trachinotus carolinus) is a fast-swimming schooling predator found in the beach surf zone. Juvenile Florida Pompano will "surf" up the beach slope in shallow water to capture prey items [17], many of which are coquina clams (Donax spp.) [18]. Coquina clams exhibit a unique behavior called 'swash-riding' wherein the clams emerge from the sediment and ride beach waves in synchrony with the tides [19]. During exposure, Florida Pompano will forage on coquinas. In this sense, coquina clams are both mobile and only briefly available when moved by wave activity, because Florida Pompano will not dig for clams once they have burrowed [17]. Although only briefly available, the clams' emergence from the sediment and presence in the water column may be predicted by local wave activity. Multiple theories of foraging facilitation by schooling fish are based on the patchy distribution of prey items (e.g. ephemeral prey schools). Because both replicating the natural behavior of coquina clams in the surf zone and conducting manipulative experiments in the surf zone is difficult, we approached the theory of group foraging facilitation differently. Here, we assessed whether the previous conclusions of foraging facilitation still hold with a natural prey item that is presented more uniformly and is present throughout the experimental trial (e.g. not a pulse of prey). In this sense, we assessed foraging behavior after a prey patch had been located. Coquina clams, frequently seen in large aggregations, can be one of the dominant fauna on exposed sandy beaches [20]. Therefore, once a coquina clam prey patch is located by a fish predator, the clams may potentially be perceived as an abundant and more uniformly distributed food supply. Our mesocosm experiment is also different from previous work because we used a paired test design wherein behaviors of fish foraging alone could be compared with their complementary behaviors in a group. Specifically, we hypothesized that foraging by juvenile Florida Pompano would be facilitated in groups (i.e. fish feeding in groups would consume more prey items). 
Predator and prey species Juvenile Florida Pompano (Trachinotus carolinus), 17-25 cm TL (total length), were the predator species used in experimental trials; they were obtained from Claude Peteet Mariculture Center (Gulf Shores, Alabama, USA), which had obtained the fish from Proaquatix (Vero Beach, Florida, USA) when they were 0.33 g (ca. 2.54-3.18 cm TL). At the mariculture center the fish were used in nutritional studies examining the effects of the number of feedings per day with pellet food and diet supplements. Fish were transported to the Dauphin Island Sea Lab (DISL, Dauphin Island, Alabama, USA) in August 2013 and were housed in an outdoor flow through mesocosm tank (diameter = 2.36 m; water depth ca. 0.6 m) for 3 months before experiments commenced. All fish remained in this tank until the night prior to their first use in a trial and were returned to this tank following trials. Florida Pompano were fed once daily a mixed diet of cut fish and squid, live coquina clams, and pellet food. The prey species, coquina clams (Donax spp.), were collected in November 2013 from multiple local beaches near Dauphin Island, AL. Multiple beaches were necessary to obtain enough prey items for both experimental purposes and general husbandry feeding. In AL, no sampling or collection permits are required for non-managed invertebrates; therefore, coquina collections were in accordance with the laws of the state of AL. Clams were collected from the swash zone of Gulf-facing beaches using a mole crab rake (also known as a triangle sand flea rake). Coquina clams represent a significant component (up to 58%) of wild pompano diets in the area [21]. Clams were sorted so only individuals 1.2-1.6 cm in length (anterior to posterior, mean = 1.5 ± 0.002 cm) were used in experimental assays. Additionally, clams with epibiotic hydroids present were not used since Manning and Lindquist [17] reported that Florida Pompano select against clams with hydroids. Feeding experiments Experiments were conducted in a set of three, recirculating indoor mesocosm tanks, unregulated for temperature (18.6 ± 0.27°C), but regulated for salinity (22.9 ± 0.15psu). Water was pumped in from Mobile Bay, AL and filtered. All trials were conducted within 8 days to minimize differences in water parameters. A fourth indoor tank was used as an overnight holding tank for the three fish involved in trials on any given day. All indoor tanks were 1.1 m in diameter with approximately 0.35 m water depth. No sand was placed on the bottom of the tanks because sand would allow the clams to bury and the Florida Pompano would not feed. All fish were fed 24 hr prior to trials. The day prior to an assay, three fish were randomly selected from the outdoor flow through mesocosm tank and relocated to the indoor holding tank where lights were kept to a 12 hr light-dark cycle and fish could acclimate ca. 12 hr. The following morning, the fish were randomly tested either as individuals or as a school. Before the start of a trial, the number of tanks necessary for assays (three for individuals or one for a school) was stocked with 60 coquina clams, haphazardly placed in the tank. Preliminary trials indicated pompano would eat up to 10 clams in 1 hr so 60 clams was chosen to equate to approximately 50% of clams being consumed when 3 fish were present. Pompano were then moved to the appropriate tank(s), allowed to forage freely, and the trial was run for 1 hour. After 1 hr, the fish were removed and placed back into the overnight holding tank. 
The remaining coquina clams were collected from the experimental tanks and counted. The experiment was repeated the following morning with the same fish placed in the opposite configuration (Fig 1). For example, if the fish were tested as a school on day 1 then they were tested as individuals in their own tanks on day 2. After each group of fish was tested together and individually, they were measured and tagged (to ensure fish were not used again) and returned to the outdoor flow through tank. This experimental procedure was repeated four times. All experimental trials were conducted between 0730 and 0900 because Florida Pompano feed during daylight hours [22]. Trials were recorded with a GoPro Hero2 or Hero3 camera mounted 15 cm above the center of the tank. To assess recovery efficiency of clams, we performed three trials in which a known number of clams was stocked and then recovered after an hour-recovery of clams was 100%. This experiment was conducted in accordance with animal care protocol #638305 and approved by the Institutional Animal Care and Use Committee at the University of South Alabama. Efforts were made to minimize stress and suffering in animal housing and experimental conditions. Data analysis Because our hypothesis was directional (facilitation), a one-tailed paired t-test was used to compare the following three foraging actions: (1) the number of successful foraging attempts (the number of clams consumed), (2) the number of unsuccessful foraging attempts (the number of clams crushed, but not consumed), and (3) the number of attacks (the number of times fish picked up a clam and then rejected it, without consuming or crushing). The data presented here are for the minimum number of attacks, as some fish were observed to take in and reject clams repeatedly. Therefore, we have underestimated the number of attacks but the underestimation is likely similar for both individuals and schools. The number of successful and unsuccessful foraging attempts, as well as the number of attacks, for fish in groups was compared to the sum of the respective activity for the three individuals comprising the group. Observations from video data allowed for analyses pertaining to the timing of feeding activities as well as the activities of each individual within a group. A one-tailed paired t-test was used to compare the time until the first feeding event between individuals and groups. For groups, the time to first feeding was considered the time elapsed between the start of the trial (when all fish were in the experimental tank) and the first feeding activity (i.e. a fish picked up a clam). For individuals, the average time to first feeding among the three fish was calculated. If any given fish did not eat during a trial, a time of 60 min was assigned as the latency to first feeding event. Lastly, to determine whether all fish within a group foraged equally, a chi-square analysis was used to compare the observed number of clams consumed to the expected number of clams for each individual. Results Feeding attempts by juvenile Florida Pompano resulted in one of three ultimate outcomes after the clam was taken into the mouth of the fish: (1) consumption of the clam (= successful foraging attempt), (2) crushing the clam but not consuming it (= unsuccessful foraging attempt), and (3) rejection of the clam without crushing the clam (= attack); Florida Pompano frequently took a clam into their mouth and rejected it, either very rapidly (almost immediately) or after several seconds. 
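For reference, the tests described under Data analysis can be run along the following lines. The group-level counts here are placeholders, while the per-fish counts (25, 16, 3) are the hierarchy example reported below; the one-tailed p-value is obtained by halving the two-sided value from the paired t-test when the effect is in the hypothesized direction.

import numpy as np
from scipy import stats

# Placeholder paired data: clams eaten by each group vs. the summed counts of the
# same three fish tested individually (four replicate groups).
group_counts = np.array([26, 30, 22, 27])
individual_sums = np.array([14, 18, 12, 15])

t_stat, p_two_sided = stats.ttest_rel(group_counts, individual_sums)
p_one_sided = p_two_sided / 2 if t_stat > 0 else 1 - p_two_sided / 2

# Chi-square test of equal consumption within one group (per-fish counts from the text).
observed = np.array([25, 16, 3])
expected = np.full(3, observed.sum() / 3)
chi2, p_chi = stats.chisquare(f_obs=observed, f_exp=expected)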
The number of clams consumed by juvenile Florida Pompano foraging in groups of three fish was greater than the sum of clams consumed by the individuals comprising their respective group (t = 2.41; p = 0.047) (Fig 2A). Consumption in groups was approximately twice the consumption of all individuals (26 vs. 14 clams). Fish foraging in groups tended to have fewer unsuccessful foraging attempts (t = -1.19; p = 0.16) (Fig 2B) and performed more attacks than fish feeding alone (t = -7.49; p = 0.009) (Fig 2C). Individual fish tended to allow more time to elapse before engaging in their first feeding activity (t = -2.79; p = 0.054) (Fig 2D). For the three groups of fish with complete 1 hr videos to accompany the trial, foraging was not equal among all fish within the group (χ² = 14.13; p = 0.007) (Fig 3). One group ate fairly equally (8-11 clams per fish), another had one fish that consumed the majority of the clams in the trial (15 clams) while the other two consumed very few (2 clams each), and the third group was a hierarchy: one fish ate 25 clams, another ate 16 clams, and the third fish ate 3 clams. Foraging facilitation in groups Juvenile fish foraging in groups were more successful than those foraging alone, and also seemed to be more comfortable engaging in feeding activities. Herein, we consider 'comfort' to be a relative measure, meaning that fish were observed to have less of a startle response, and generally consumed and/or attempted to consume more prey items. Groups of juvenile Florida Pompano consumed more clams, performed more attacks, and tended to leave fewer crushed clams behind. This suggests juvenile Florida Pompano are both more comfortable and more successful foraging in groups; the greater number of attacks (even though the clams were not ultimately consumed) while foraging in groups is likely a result of increased foraging activity, since fish in groups are more likely to engage in foraging activities. Leaving fewer crushed clams behind while in groups suggests fish are more likely to take the time to complete the feeding event as opposed to picking up the prey item and then rejecting it. Our result of increased foraging in groups is consistent with many other teleost studies (e.g., [10,14,15]). Increased foraging may also be the result of allocating more time to feeding when in groups (e.g., [23]) and appears to be correlated with the number of individuals in a group: greater foraging success and more time allocated to feeding have been reported as group size increases (e.g., [10,14,23,24]). Foraging facilitation may also occur among species, as reported by Pereira et al. [25], wherein bucktooth parrotfish were observed to take advantage of nearby sailor's grunt schools to feed inside the highly defended territory of damselfish, thereby facilitating access to a food resource that would not normally be accessible to the parrotfish alone. But note that other studies have reported opposite results, with less foraging success or lower feeding rates when in groups and greater foraging as solitary individuals. Furthermore, foraging patterns sometimes differ among fish life stages. For example, adults of four Haemulon species were observed to have higher foraging rates when solitary as opposed to when schooling on a Brazilian coral reef [26], but juveniles of these species did not show a clear foraging pattern between solitary and schooling individuals.
Likewise, observed feeding patterns differed among life stages of Haemulon flavolineatum in mangroves and seagrass beds, wherein sub-adults showed no pattern between schooling and solitary individuals, large solitary juveniles spent most of their time foraging while schooling ones mainly rested, and small juveniles in seagrass beds mainly foraged when schooling [27]. In summary, patterns of fish foraging behavior in regards to schooling vs. solitary individuals can differ among species and among size or age-classes within species. Our experimental results indicated that juvenile Florida Pompano foraging in schools were more successful than those foraging individually. In this experiment, our results suggest increased feeding in groups is a true response, and not a learned response. The fish used in experimental trials were naïve-no experimental fish were previously used for preliminary work. Juvenile Florida Pompano were also only used twice, each time for only an hour, minimizing the time available to 'learn' in experimental conditions. Furthermore, individual fish did not always consume more clams the second time they were in the experimental tank, regardless of whether they were tested in a group or as individuals first. Therefore, we do not believe that results were impacted by fish learning. Additionally, we are aware that the group size (3 fish) in this experiment was small and may be a caveat to some conclusions, but we believe the results are still robust and potentially applicable to Florida Pompano populations in this area of the north-central Gulf of Mexico. Although we used the minimum school size, the Florida Pompano used in these experiments did show schooling behavior with just 3 fish in the tanks. Furthermore, Florida Pompano in this region of the Gulf coast are typically not captured in large schools (as is the case in some areas of Florida). Indeed, catch-per-unit-effort of Florida Pompano never exceeded 5 fish hr -1 in two years of standardized gillnet surveys (M. Schrandt, unpublished data). Lastly, we had to consider the size of the experimental tanks to be sure that Florida Pompano were not crowded and that the number of coquina clams placed in the tanks did not get so large as to nearly cover the bottom of the tank, thus not reflecting natural local coquina abundances as well. If a larger school size was used, however, we would predict that foraging success would increase with group size as has been reported previously (e.g., [10,14,23,24])-to an extent. At some point, we would expect the number of fish to exceed the food resources, resulting in competition that may ultimately lead to a plateau or a decrease in the number of prey items consumed by some individuals (see discussion below on individual behavioral variation and the dominance effect). We also observed a trend toward greater latency until feeding for fish feeding alone. We speculate the trend would have been statistically significant if one group had not waited 40 min until feeding-all other groups began feeding activity within 20 min, averaging 12.5 min. This is in opposition to individuals, who waited approximately twice as long as groups to initiate feeding activities. Engaging in feeding activities earlier when in groups is similar to previous group-foraging experiments (e.g. [9,10,28]). The shorter time until feeding, combined with greater consumption and less crushing, further supports our conclusion that juvenile Florida Pompano are more comfortable and successful foraging in groups. 
Various hypotheses, varying in mechanism, apply to an overall increase in feeding rates in larger groups, as was observed in this experiment. These hypotheses include intragroup competition, social facilitation, and vigilance sharing (reviewed by e.g., [29]). Intragroup competition forces individuals to feed quicker. Social facilitation suggests that the willingness of any individual to feed increases with the number of individuals feeding in the group. The vigilance sharing hypothesis allows for individuals within the group to share the time spent being alert for predators, increasing the time allowed for feeding. Our data are consistent with an overall facilitation of foraging but they do not suggest any particular mechanism. It is likely that greater foraging success observed in juvenile Florida Pompano in this experiment is due to a combination of multiple explanatory hypotheses, potentially intragroup competition and/or social facilitation since a predator of Florida Pompano was not included in the experiment. Lastly, our results indicate that juvenile Florida Pompano of similar size do not equally benefit from group-foraging. Intraspecific behavioral variation was observed during this experiment. There is a growing literature on individual variation in fish behavior, seemingly spanning all observable behaviors (e.g. activity, aggressiveness, shyness, boldness, exploration, avoidance, spawning, sociability) and the concept of behavioral syndromes in fishes is becoming more widespread (see review by [30]). For example, adult individuals of the yellow saddle goatfish Parupeneus cyclostomus may live solitarily (associated with searching for hidden, immobile prey items) or in groups where they exhibit collaborative hunting, with individuals performing different roles, to capture mobile prey items in corals [13]. Strübin et al. [13] examined the goatfish in their natural coral reef habitat, but here we were observing Florida Pompano in experimental tanks, where the escape response of the prey item was effectively removed-coquina clams were not able to bury into sediment. We do not believe the Florida Pompano were collaboratively hunting during this experiment because prey items were readily available on the bottom of the tank. Contrarily, the Florida Pompano may have been affected by social facilitation and/or intragroup competition (as mentioned above) to yield unequal foraging rates. It is important to consider whether fish are in competition with conspecifics because if so, activity levels (and hence, feeding) may depend on dominance rank [30]. In this experiment we observed an overall dominance effect where foraging in groups ranged from equal foraging among group members to a distinct hierarchy. Our results are similar to Milinski [31], who reported differing competitive abilities in sticklebacks (dominant fish ate 2-3 times more than subordinate fish) although they were presumed to be similar for experimental purposes. Whether this is a short-or long-term phenomenon is not known. We did not test the groups multiple times so we could not determine if the dominant fish in one trial remained dominant in subsequent trials. It is likely that this is a long-term effect of group foraging because dominance behavior resembles a positive feedback loop-dominance behavior will increase the feeding rate of dominant members and simultaneously reduce that of subordinate members [29], further increasing the disparity between dominant and subordinate individuals. 
Our results support the positive feedback loop characteristic because when dominance was present, the dominant individual(s) ate 2 times that of subordinate individuals. Maricultured vs. wild-caught fish We exercise caution with our interpretation because we used maricultured Florida Pompano. These fish were raised under hatchery conditions and were not held in isolation prior to arriving at our facility. Despite this, we believe the observed patterns reflect natural conditions because the feeding rates for individuals and groups (mean of ca. 5 clams/fish when alone and 10 clams/fish when foraging together) are in line with previous publications using wild-caught Florida Pompano. In our experiment, the number of clams consumed by individual maricultured fish when in groups was ca. 2 times greater than that for Lindquist and Manning [32] who examined the effect of turbidity on Florida Pompano foraging on coquina clams. Another experiment assessing potential preference for clams with or without hydroids present resulted in an average consumption of 7 clams within 5 min [17]. This would suggest a much higher feeding rate by wild-caught Florida Pompano (ca. 84 clams/hr); however, there was a fundamental difference in experimental procedures. Manning and Lindquist [17] added the clams to the tanks after the pompano were present. Since Florida Pompano are sight feeders and potentially perceived the prey as being available for a short time period, they likely went after the clams more readily. Captive Florida Pompano feeding rates appear to be context-dependent but the maricultured fish used in our experimental trials consumed an intermediate amount of clams. Furthermore, maricultured fish responded to visual cues, participated in foraging activities, and exhibited feeding rates similar to wild-caught Florida Pompano that were previously held at DISL (Schrandt, pers. obs.). Overall, we believe the patterns observed here (i.e. increased foraging when in groups) may generally reflect feeding patterns in wild populations since the maricultured fish fed, responded to visual cues, and had feeding rates within the range of published rates for wild-caught Florida Pompano of similar size. Conclusions In concert, our results suggest that groups facilitate foraging even after locating a prey patch and that juvenile Florida Pompano are more comfortable and successful foraging in groups. This provides new information on the foraging ecology of Florida Pompano, with both ecological and economic implications. No previous studies have addressed group-feeding behaviors of Florida Pompano foraging on the main contributor to their natural diet. Ultimately, schooling may facilitate juvenile Florida Pompano feeding activities along the sandy beach habitat, an area where presumably (1) feeding is difficult because of the dynamic environment and (2) little protection is available from predators. Furthermore, our results are applicable to Florida Pompano aquaculture. Because foraging (and hence growth) is facilitated by groups, Florida Pompano should be reared with conspecifics, preferably of similar size to potentially reduce the dominance effect we observed. Periodically size-separating fish during the rearing period could lead to more efficient growth and harvest of Florida Pompano.
6,042.4
2015-06-11T00:00:00.000
[ "Biology", "Environmental Science" ]
Integrating Intra-Speaker Topic Modeling and Temporal-Based Inter-Speaker Topic Modeling in Random Walk for Improved Multi-Party Meeting Summarization This paper proposes an improved approach of summarization for spoken multi-party interaction, in which intra-speaker and inter-speaker topics are modeled in a graph constructed with topical relations. Each utterance is represented as a node of the graph, and the edge between two nodes is weighted by the similarity between the two utterances, which is the topical similarity, as evaluated by probabilistic latent semantic analysis (PLSA). We model intra-speaker topics by sharing the topics from the same speaker and inter-speaker topics by partially sharing the topics from the adjacent utterances based on temporal information. For both manual transcripts and ASR output, experiments confirmed the efficacy of combining intra- and inter-speaker topic modeling for summarization. Introduction Speech summarization is very important [1], because multimedia/spoken documents are more difficult to browse, and it has been actively investigated before.While most work focused primarily on news content, recent effort has been increasingly directed to new domains such as lectures [2,3] and multi-party interaction [4,5,6].We take meeting recording as multi-party interaction and do experiments on this dataset, where we perform extractive summarization on ASR and manual transcripts [7]. For text summarization, many approaches focus on graph-based methods to compute lexical centrality of each utterance to extract summaries [8].The speech summarization carries intrinsic difficulties due to the presence of recognition errors, spontaneous speech effect, and lack of segmentation.A general approach has been found very successful [9], in which each utterance in the document d, U = t 1 t 2 ...t i ...t n , represented as a sequence of terms t i , is given an importance score: where s(t i , d), l(t i ), c(t i ), g(t i ) are respectively some statistical measure (such as TF-IDF), linguistic measure (e.g., different part-of-speech tags are given different weights), confidence score and N-gram score for the term t i , and b(U ) is calculated from the grammatical structure of the utterance U , and λ 1 , λ 2 , λ 3 , λ 4 and λ 5 are weighting parameters.For each document, the utterances to be used in the summary are then selected based on this score. In recent work, we proposed a graphical structure to rescore I(U, d) above in (1), which can model the topical coherence between utterances using random walk within documents [3,5].Unlike lecture and news summarization, meeting recording is the multi-party interaction corpus so that the relations such as topic distribution within a single speaker or between speakers can be considered.Thus, this paper models intra-and inter-speaker topics together in the graph by partially sharing topics with the utterances from the same speaker or adjacent utterances to improve meeting summarization [10]. 
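For concreteness, the importance score introduced above can be written out. The following is a reconstruction consistent with the description (per-term statistical, linguistic, confidence and N-gram measures weighted by λ1-λ4, plus the utterance-level grammatical-structure term weighted by λ5); the exact form used in [9] may differ slightly:

I(U,d) \;=\; \sum_{i=1}^{n}\Big[\lambda_1\, s(t_i,d) \,+\, \lambda_2\, l(t_i) \,+\, \lambda_3\, c(t_i) \,+\, \lambda_4\, g(t_i)\Big] \;+\; \lambda_5\, b(U)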
Proposed Approach We first preprocess the utterances in all meetings: word stemming and noise utterance filtering.Then we construct a graph to compute the importance of all utterances.We formulate the utterance selection problem as random walk on a directed graph, in which each utterance is a node and the edges between them are weighted by topical similarity.The basic idea is that an utterance similar to more important utterances should be more important [3].We then keep only the top N outgoing edges with the highest weights from each node, while consider incoming edges to each node for importance propagation in the graph.A simplified example for such a graph is in Figure 1, in which A i and B i are the sets of neighbors of the node U i connected respectively by outgoing and incoming edges. Parameters from Topic Model Probabilistic latent semantic analysis (PLSA) [11] has been widely used to analyze the semantics of documents based on a set of latent topics.Given a set of documents {d j , j = 1, 2, ..., J} and all terms {t i , i = 1, 2, ..., M } they include, PLSA uses a set of latent topic variables, {T k , k = 1, 2, ..., K}, to characterize the "term-document" co-occurrence relationships.The PLSA model can be optimized with EM algorithm by maximizing a likelihood function [11].We utilize two parameters from PLSA, latent topic significance (LTS) and latent topic entropy (LTE) [12].The parameters also can be computed by other topic model such as latent dirichlet allocation(LDA) [13] in similar way. Latent Topic Significance (LTS) for a given term t i with respect to a topic T k can be defined as where n(t i , d j ) is the occurrence count of term t i in a document d j .Thus, a higher LTS ti (T k ) indicates the term t i is more significant for the latent topic T k .Latent Topic Entropy (LTE), for a given term t i can be calculated from the topic distribution where the topic distribution P (T k | t i ) can be estimated from PLSA, LTE(t i ) is a measure of how the term t i is focused on a few topics, so a lower latent topic entropy implies the term carries more topical information. Statistical Measures of a Term Here in this work, the statistical measure of a term t i , s(t i , d) in ( 1) can be defined based on LTE(t i ) in (3) as where γ is a scaling factor such that 0 ≤ s(t i , d) ≤ 1, so the score s(t i , d) is inversely proportion to the latent topic entropy LTE(t i ).Some works [12] showed that this measure outperformed the very successful "significance score" [9] in speech summarization, and here we use LTE-based statistical measure, s(t i , d), as the baseline. Topical Similarity between Utterances Within a document d, we can first compute the probability that the topic T k is addressed by an utterance U i , Then an asymmetric topical similarity Sim(U i , U j ) for utterances U i to U j (with direction U i → U j ) can be defined by accumulating LTS t (T k ) in (2) weighted by P (T k | U i ) for all terms t in U j over all latent topics, where the idea is similar to generative probability in IR. We call it generative significance of U i given U j . Intra/Inter-Speaker Topic Modeling We additionally consider speaker information to model topics more accurately, where w intra is topic sharing weight for intra-speaker and w inter is for inter-speaker topic sharing, which are described as Section 2.4.1 and 2.4.2 respectively. 
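To make the quantities defined above concrete (LTE, the LTE-based term score s(t_i, d), the utterance-level topic distribution P(T_k | U_i), and the asymmetric similarity Sim(U_i, U_j)), the following is a minimal Python sketch of one way they could be computed from PLSA posteriors. It is illustrative only: the exact definition of LTS and the role of the scaling factor γ are not fully specified above, so those parts are assumptions flagged in the comments, and names such as p_topic_given_term are hypothetical.

import numpy as np

def latent_topic_entropy(p_topic_given_term):
    # LTE(t_i) = -sum_k P(T_k | t_i) * log P(T_k | t_i); a lower value means the term
    # is focused on fewer topics and therefore carries more topical information.
    p = np.clip(p_topic_given_term, 1e-12, 1.0)
    return float(-np.sum(p * np.log(p)))

def term_score(count_in_doc, lte, gamma):
    # LTE-based statistical measure s(t_i, d), inversely proportional to LTE(t_i).
    # Whether the in-document count enters exactly this way is an assumption.
    return gamma * count_in_doc / lte

def topic_given_utterance(term_counts, p_topic_given_term):
    # P(T_k | U_i): mix the per-term topic posteriors using term counts as weights
    # (the precise mixing rule is an assumption).
    weights = term_counts / max(term_counts.sum(), 1)
    return weights @ p_topic_given_term  # shape (K,)

def topical_similarity(p_topic_u_i, lts_terms_in_u_j):
    # Asymmetric Sim(U_i -> U_j): accumulate LTS_t(T_k) for every term t in U_j,
    # weighted by P(T_k | U_i), over all latent topics ("generative significance").
    # lts_terms_in_u_j has shape (number of terms in U_j, K) and is taken as given.
    return float(np.sum(lts_terms_in_u_j * p_topic_u_i[None, :]))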
Intra-Speaker Topic Sharing Weight Since we assume that the utterances from the same speaker in the dialogue usually focus on similar topics, which means if an utterance is important, the other utterances from the same speaker are more likely to be important in the dialogue [5].Then we can estimate Sim (U i , U j ) by setting w intra (U i , U j ) as S k is the set including all utterances from speaker k and δ is a weighting parameter for modeling the speaker relation.Here the topics from the same speaker can partially shared. Inter-Speaker Topic Sharing Weight Topic transition between adjacent utterances should be slow so that adjacent utterances should have similar topic distribution [14] even though they are not from the same speaker, and then we can increase Sim (U i , U j ) if U i and U j have closer position in the dialogue.Thus, we compute the weight for inter-speaker topic sharing as where l i is the position of the utterance U i in the dialogue, which means U i is the l i -th utterance in the dialogue.The boundary of utterance is decided by SmartNote [4].(10) is under an assumption that topic sharing is based on a normal distribution with a standard deviation σ.If |l i −l j | is smaller, which means U i and U j is closer to each other, and they may share their topics so that w inter (U i , U j ) is larger in (10).σ is a parameter of topic sharing range, which can be tuned by dev set.We normalize the similarity summed over the top N utterance U k with edges outgoing from U i , or the set A i , to produce the weight p(i, j) for the edge from U i to U j on the graph, Random Walk We use random walk [3,15] to integrate two types of scores over the graph obtained above.v(i) is the new score for node U i , which is the interpolation of two scores, the normalized initial importance, r(i), for node U i and the score contributed by all neighboring nodes U j of node U i weighted by p(j, i), where α is the interpolation weight, B i is the set of neighbors connected to node U i via incoming edges, and r(i) is normalized importance scores of utterance U i , I(U i , d) in ( 1). ( 12) can be iteratively solved with the approach very similar to that for the PageRank problem [16].Let v = [v(i), i = 1, 2, ..., L] T and r = [r(i), i = 1, 2, ..., L] T be the column vectors for v(i) and r(i) for all utterances in the document, where L is the total number of utterances in the document d and T represents transpose.(12) then has a vector form below, where P is L × L matrices of p(j, i), and e = [1, 1, ..., 1] T .Because i v(i) = 1 from ( 12), e T v = 1.It has been shown that the closed-form solution v of ( 13) is the dominant eigenvector of P [17], or the eigenvector corresponding to the largest absolute eigenvalue of P .The solution v(i) can then be obtained. 
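As a rough sketch of the re-scoring pipeline described in this section, the Python fragment below builds the speaker-aware edge weights, prunes to the top-N outgoing edges, normalises them into p(i, j), and solves v = (1 - alpha) r + alpha P^T v by simple power iteration rather than an explicit eigen-decomposition. How w_intra and w_inter are combined with the topical similarity is not spelled out above, so the multiplicative boost used here, like the helper names, is an assumption.

import numpy as np

def speaker_weights(speakers, positions, delta, sigma):
    # w_intra(i, j): delta when U_i and U_j come from the same speaker, 0 otherwise.
    # w_inter(i, j): Gaussian in the utterance-position distance |l_i - l_j| with
    # standard deviation sigma (the topic-sharing range tuned on the dev set).
    speakers = np.asarray(speakers)
    positions = np.asarray(positions, dtype=float)
    w_intra = delta * (speakers[:, None] == speakers[None, :]).astype(float)
    dist = np.abs(positions[:, None] - positions[None, :])
    w_inter = np.exp(-dist ** 2 / (2.0 * sigma ** 2))
    return w_intra, w_inter

def edge_weights(sim, w_intra, w_inter, top_n):
    # Boost topical similarity with the speaker weights (an assumed combination),
    # keep only the top-N outgoing edges per node, then row-normalise to get p(i, j).
    s = sim * (1.0 + w_intra + w_inter)
    np.fill_diagonal(s, 0.0)
    for i in range(s.shape[0]):
        keep = np.argsort(s[i])[-top_n:]
        mask = np.zeros_like(s[i])
        mask[keep] = 1.0
        s[i] *= mask
    row_sums = s.sum(axis=1, keepdims=True)
    return np.divide(s, row_sums, out=np.zeros_like(s), where=row_sums > 0)

def random_walk(p, r, alpha, n_iter=100, tol=1e-8):
    # Solve v = (1 - alpha) * r + alpha * P^T v by power iteration, keeping sum(v) = 1.
    r = r / r.sum()
    v = r.copy()
    for _ in range(n_iter):
        v_new = (1.0 - alpha) * r + alpha * (p.T @ v)
        v_new = v_new / v_new.sum()
        if np.abs(v_new - v).max() < tol:
            break
        v = v_new
    return v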
Corpus The corpus used in this research is a sequence of natural meetings, which featured largely overlapping participant sets and topics of discussion. For each meeting, SmartNotes [4] was used to record both the audio from each participant and his or her notes. The meetings were transcribed both manually and using a speech recognizer; the word error rate is around 44%. In this paper we use 10 meetings held from April to June of 2006. On average each meeting had about 28 minutes of speech. Across these 10 meetings there were 6 unique participants; each meeting featured between 2 and 4 of these participants (average: 3.7). The total number of utterances across the 10 meetings is 9,837. We separate the data into a dev set (2 meetings) and a test set (8 meetings). The dev set is used to tune parameters such as α, σ, and δ. The reference summaries are given by the set of noteworthy utterances. Two annotators manually labelled the degree (three levels) of "noteworthiness" for each utterance, and we extract the utterances with the top level of "noteworthiness" to form the summary of each meeting. In the following experiments, for each meeting, we extract the top 30% of terms as the summary. Evaluation Metrics Automated evaluation uses the standard DUC evaluation metric ROUGE [18], which measures recall over various n-gram statistics of a system-generated summary against a set of human-generated reference summaries. F-measures for ROUGE-1 (unigram) and ROUGE-L (longest common subsequence) are evaluated in the same way and are used in the following results. Results Table 1 shows the performance achieved by all proposed approaches. Row (a) is the baseline, which uses the LTE-based statistical measure to compute the importance of utterances I(U, d). Row (b) is the result after applying the random walk with only topical similarity. Row (c) is the result additionally including intra-speaker topic modeling (w_intra ≠ 0); row (d) includes inter-speaker topic modeling (w_inter ≠ 0). Row (e) is the result obtained by integrating the two types of speaker information (with w_intra ≠ 0 and w_inter ≠ 0). Note that the performance on ASR output is better than on manual transcripts. Because a higher percentage of recognition errors falls on "unimportant" words, utterances containing more errors tend to receive lower scores and are therefore excluded, which improves the summarization results. Some recent works also show better performance for ASR than for manual transcripts [3,5]. Graph-Based Approach We can see that the performance after graph-based recomputation, row (b), is significantly better than the baseline, row (a), for both ASR and manual transcripts; the improvement for ASR is larger than for manual transcripts. Effectiveness of Speaker Information Modeling We find that modeling intra-speaker topics improves the performance (row (b) vs. row (c)), which means speaker information is useful for modeling topical similarity. The experiment shows that intra-speaker modeling helps include the important utterances for both ASR and manual transcripts. We also find that modeling inter-speaker topics alone does not offer a significant improvement for ASR transcripts (row (b) vs. row (d)), probably because sharing topics with adjacent utterances may decrease centrality, especially for utterances with recognition errors. For manual transcripts, the improvement from the inter-speaker topic model is likewise not significant.
Row (e) is the result of the proposed approach, which integrates intra-speaker and inter-speaker topic modeling into a single graph, considering the two types of relations together. For ASR transcripts, row (e) is better than rows (c) and (d), which means intra-speaker and inter-speaker information cover different types of relations, and the relations can be additive. Note that using inter-speaker topic modeling alone cannot improve the performance, but integrating it with intra-speaker topic modeling offers better results. The reason may be that intra-speaker topic modeling enhances the centrality of important utterances, while additionally involving inter-speaker topic modeling slightly decreases centrality but successfully smooths the topic transition between adjacent utterances. For manual transcripts, row (e) also performs better by combining the two types of speaker information, and the improvement is larger than for ASR transcripts. Since topical similarity can model the relations accurately when there are no recognition errors, integrating the two types of speaker information effectively improves the performance. In addition, Banerjee and Rudnicky [4] used supervised learning to detect noteworthy utterances in the same corpus, achieving 43% (ASR) and 47% (manual) ROUGE-1. Compared to that, our unsupervised approach performs better, especially for ASR transcripts. Conclusions Extensive experiments and evaluation with ROUGE metrics showed that inter- and intra-speaker topics can be modeled together in a single graph and that the random walk can combine the advantages of the two types of speaker information for both ASR and manual transcripts, where we achieved more than 6% relative improvement. Figure 1: A simplified example of the graph considered. Table 1: Maximum relative improvement (RI) with respect to the baseline for all proposed approaches (%).
3,385.2
2012-09-01T00:00:00.000
[ "Computer Science" ]
Teachers’ Readiness towards the Integration of Information an d Communications Technology in Teaching and Learning of Engineering Graphics and Design in KwaZulu-Natal The integration of information and communications technology (ICT) into the education system has led to changes in the way teaching and learning are conducted. These changes have necessitated the need for teachers to have ICT skills that would help them integrate ICT into teaching and learning (T&L). Hence, this qualitative study was conducted to investigate the state of readiness of Engineering Graphics and Design (EGD) teachers in the integration of ICT in T&L in uMgungundlovu secondary schools. Convenience sampling was employed to select nine EGD teachers to partake in this study. Semi-structured interviews and classroom observations were used to collect data. Data gathered from interviews was subjected to thematic analysis, and data gathered from observations was reported descriptively. The findings of this study revealed that EGD teachers in uMgungundlovu District are ready to integrate ICT into the T&L of EGD, as they indicated that ICT integration in EGD lessons is essential. The study further revealed a shortage of ICT resources and a lack of ICT skills among teachers, which hinder the successful integration of ICT. The study recommends that the Department of Basic Education (DBE) provide teachers with ICT training so that those who are technically disadvantaged can be equipped with relevant ICT skills. The study further recommends that DBE give the schools an AutoCAD license, as it has been proven to be a useful ICT tool. INTRODUCTION Over the past two decades, ICT usage has been on the rise.Many factors have contributed to this rapid change in the advancement of ICT.The outbreak of COVID-19 and the world's migration to digital learning in the form of adapting to the Fourth Industrial Revolution (4IR) are some of the factors that contributed to ICT advancement.Consequently, the outbreak of COVID-19 has compelled educational institutions to come up with alternative ways to supplement the traditional teaching approach (Wyk et al., 2020).However, Mafenya (2022) argues that this sudden transition has undoubtedly caused an incredible amount of damage and disruption to our educational system.One of the ways that can be used to supplement the traditional teaching approach is the integration of ICT in the T&L of EGD.Hence, this study was conducted to investigate the level of readiness among EGD teachers to integrate ICT into T&L.EGD is a subject offered in the Further Education and Training (FET) phase, which is Grades 10-12 in South African secondary schools (DBE, 2011).The subject EGD is one of the technical-practical subjects that mainly focuses on line work, accuracy, and neatness.The concept of understanding line work rests on understanding the different types of lines that are used in EGD.This notion is echoed by Khoza (2018), who reported that there are 10 different lines that are used in EGD, and all these 10 different lines have different meanings altogether.Therefore, it is imperative for learners who are doing EGD to understand all these 10 different lines and the impact that these lines can have on drawings.Since EGD focuses on teaching principles that have both academic and technical applications (DBE, 2011), it needs a specific skill during its facilitation.These various skills range from spatial skills to the visualisation of abstract concepts to the understanding of different lines used in EGD (Khoza, 
2013;Sotsaka, 2015).The integration of ICT requires teachers to remodel their pedagogical methods in T&L (DoE, 2004).This transformation, as expressed by the DoE, compels teachers to tweak their pedagogical methods to accommodate the integration of ICT in their T&L.However, change is hard, especially if you are changing something that you feel is not broken.As a result, many teachers are sceptical of this transformation because they feel that it is too great.In support, Msila (2015) revealed that teachers claimed that they have been teaching for years without using technology and have been producing great results in the process.The concern from teachers is quickly squashed by Maharaj- Sharma and Sharma (2017), who argue that ICT infusion in EGD encourages learners to seek knowledge themselves rather than waiting for teachers to be the sole providers of knowledge.The importance of integrating ICT into EGD lessons became evident when physical education classes were suspended due to the outbreak of COVID-19.Consequently, EGD teachers could not conduct classes virtually, citing that the EGD curriculum was not comfortable for virtual learning.This shows that EGD teachers are far behind when it comes to integrating ICT into their lessons. Integration of technology into education has been a prominent topic for quite some time.As a result, most developed countries, such as the USA and China, are a step ahead in the integration of ICT into T&L.According to Hismanoglu (2012), Turkey spent about 11.7 percent of its educational budget on ICT integration.However, in developing countries like South Africa, they are still far behind in the integration of ICT into T&L (Jhurree, 2005).Furthermore, Mashile (2017) posits that only 26 percent of teachers in South Africa are capable of integrating ICT into T&L.Besides that, it looks like EGD teachers are reluctant to use ICT, fearing that it is going to replace them, and this phenomenon might be shared by most of the EGD teachers around uMgungundlovu District as well.The integration of ICT into teaching has brought about changes in the style of T&L (Faloye et al., 2022).This change has compelled the need to investigate the readiness of EGD teachers for the integration of ICT in T&L. Research Questions This study used the following main research question: What is the readiness of EGD teachers for the integration of ICT in teaching and learning in uMgungundlovu secondary schools? The main research question was supported by the following sub-research questions: • What are the challenges faced by EGD teachers in the adoption of ICT in EGD classrooms? • What is the EGD teachers' technological knowledge in teaching and learning? • What are the different ways that EGD teachers can use ICT to integrate IT into teaching and learning? 
LITERATURE REVIEW ICT as a Tool to Improve Spatial Visualisation According to Khoza (2013), some concepts in EGD (for example, sectional drawing), require learners to imagine the part of the object that is removed to reveal hidden detail when nothing has been removed.This part of EGD requires learners to have a spatial-visualisation skill, which can be defined as an EGD cornerstone, together with knowledge of the different lines that are used in EGD.As attested by Khoza (2013) and Makgato (2016), most learners have poor spatial skills.Consequently, it has compelled the need to integrate ICT into the T&L of EGD, as abstract concepts can be best studied through technology.This is an indication that ICT integration is imperative in EGD because it can assist with the development of the most important skill in EGD, which is spatial visualisation.Sotsaka (2019) posits that spatial visualisation skills are essential because they develop the ability to transform abstract concepts into concrete ones.Furthermore, spatial visualisation is defined as the mental capability to execute certain graphical tasks (Rodriguez & Rodriguez-Velazquez, 2017).It is evident that to understand some concepts in EGD, one must have a spatial visualisation skill, as has been deemed necessary by the above authors, which can be best taught using technology.This is so because spatial visualisation is a solution to understanding abstract concepts in EGD. The Importance of ICT Training for Teachers As much as going virtual is a good idea given the current state we are living in, some teachers are not in a position to conduct lessons virtually as they are not trained to carry out such activities.This is echoed by Msila (2015), who says that teachers are firm believers that training is very important for them so that they integrate ICT into EGD lessons.During COVID-19 lockdown, teachers around uMgungundlovu could not conduct classes from home as they cited that they did not know how to conduct classes online.They are not confident using technology in front of learners.Another reason was that they also did not have ICT materials to help them carry out lessons from home.The issue of being less confident is alluded to by Howard and Mozejko (2015), who states that teachers are feeling less confident because they lack training.Adams (2020) sees this as an indication that without proper ICT training, teachers are doomed.Furthermore, Barbour (2014) cites that teachers should be subjected to development programmes and workshops where they will be equipped with ICT skills.Training teachers is a crucial component to integrating ICT into T&L.For teachers to be effective in implementing ICT in education, they require training.Many countries across the world have realised the importance of ICT; hence, they have started to provide ICT training to teachers in various forms and degrees (Jung, 2005).However, in South Africa, there are still teachers who claim that they have not been trained to use technology.Furthermore, Alazam et al. (2013) postulate that ICT can prove to be a very crucial component in a classroom if used wisely by a well-trained teacher.This simply means that teachers need to be trained so that they can integrate ICT into T&L effectively.Alazam et al. 
(2013) found that the level of teachers' ICT skills and usage was moderate in a study that examined the levels of ICT skills and ICT use in classrooms.In addition, teachers who possess ICT skills are found to be more useful than those who do not (Rastogi & Malhotra, 2013).This shows how important it is for teachers to be trained so that they can have relevant skills for integrating ICT.The importance of training teachers is a very important step in ensuring that teachers are in a better position to integrate ICT.This is supported by Tasir et al. (2012), who cite that there is an increase in the number of countries that are undertaking the programme of training teachers for ICT integration.Hence, the explosion of ICT all over the world has compelled individuals to have ICT skills, which are deemed paramount in the present time. Availability of Resources for Schools to Integrate ICT For EGD teachers to be able to integrate ICT into the T&L, they need to have access to ICT resources, and classrooms must be in a good state to support the integration of ICT.This notion is voiced by Mathevula and Uwizeyimana (2014), who argue that ICT equipment must first be available in schools before teachers can start integrating ICT into their T&L.In the schools where this study was conducted, teachers had access to basic ICT tools such as the projector, IWB, photocopiers, and computers.Others even had access to AutoCAD, which assists in the teaching of abstract concepts.Being able to teach abstract concepts with ease is a very important step toward improving learners' spatial visualisation, which has been proven to be a big problem.The above assertion is supported (Khoza, 2018;Khoza, 2013;Makgato & Khoza, 2016) that most learners are poor when it comes to spatial visualisation.On the contrary, this is only a dream in developing countries like South Africa, as most schools do not even have a computer or access to the internet (Mathevula & Uwizeyimana, 2014).Consequently, the lack of ICT resources has always been a cause for outcry for many teachers around the world, especially in developing countries like South Africa.The contrast is because the study by Mathevula was conducted in Limpopo, which is why the findings contradict the experience in uMgungundlovu District.The same sentiment was echoed by Mathevula and Uwizeyimana (2014), who reported that there is a lack of ICT resources for ICT integration in schools.Photocopiers, TVs, and laptop or desktop computers are the only ICT resources available to teachers in schools (Mathevula & Uwizeyimana, 2014). 
THEORETICAL FRAMEWORK This study adopted the Technological Pedagogical Content Knowledge (TPACK) framework by Koehler and Mishra (2009), as shown in Figure 1 below, to underpin the study.In the context of this study, only three components of TPACK were deemed relevant to the scope of this study, which are: Technological Knowledge (TK), Technological Pedagogical Knowledge (TPK) and Technological Content Knowledge (TCK).Figure 1.The TPACK Framework (Koehler & Mishra, 2009) According to Kurt (2018), TK refers to teachers' knowledge and ability to use a wide range of technologies to enhance T&L.It also has to do with teachers' understanding of using technology in their everyday teaching.In addition, Koehler and Mishra (2009) define TK as the "fluency of information technology," which translates to having an immense understanding and knowledge about integrating technology into T&L.In the context of this study, this component was used to investigate teachers' technological knowledge towards the use of technology in their EGD classes.TCK looks at the relationship between technology and content knowledge about the subject matter (Kurt 2018).It's about how technology and content affect each other. Teachers need to understand which technologies are best suited to address specific topics (Koehler & Mishra 2009).The TCK component is further defined as the knowledge of how technology can create new representations for specific content (Koehler & Mishra 2009).The TCK component is important to understand because, if better understood, an EGD teacher would develop appropriate technological tools to present tools and equipment in both the theoretical and practical constituencies.In this study, this component was used to investigate teachers' level of competency with the use of technology and how well they manage to showcase a newer teaching style when teaching EGD."TPK refers to the knowledge of how various technologies can be used in the classroom and the understanding that the use of technology can change the way teachers teach" (Koehler & Mishra 2009).TPK was verified through the lesson observations, where teachers were observed on how they use technology to teach EGD.This component was used to investigate teachers' understanding of using a variety of technologies at their disposal in a manner that would enhance T&L. METHODOLOGY This study employed a qualitative approach.According to Bhandari (2020), qualitative research involves the collection and analysis of non-numerical data.This study employed a qualitative approach mainly because of its ability to gather in-depth insights into a problem. Research Design and Paradigm This study employed a descriptive research design to obtain information to describe the phenomenon.The descriptive research design was used because of its strength in gathering an in-depth view of any phenomenon that is under study (Sumeracki, 2018).This aspect of getting an in-depth view of the matter at hand was evident as the researcher needed to establish the readiness of EGD teachers, and the best way to do that was to get an in-depth view on their readiness. 
A study done by Kivunja and Kuyini (2017) articulates three research paradigms that are used in educational research: positivism, interpretivism, and critical theory.Consequently, this study embraced the interpretivism paradigm.The interpretivist paradigm was necessary for this qualitative study because the aim of the study was to understand the phenomenon of EGD teachers' readiness to integrate ICT in their classrooms through face-to-face interviews and classroom observation.Rehman and Alharthi (2016) alluded to interpretivism research relying mostly on verbal data; hence, this study used semi-structured interviews to exploit the advantage of interpretivism. Population and Sampling The target population for this study was Grade 10 and 11 teachers from nine selected schools.These teachers were selected because they were all teaching EGD in schools around the uMgungundlovu district.This study focused on schools under the Umsunduzi Circuit situated in Pietermaritzburg under the KwaZulu-Natal province in South Africa, which has 11 secondary schools that offer EGD.Sharma (2017) asserts that researchers usually use sampling because it is impossible to test every single person in a chosen population, so a subset is required.In addition, Taherdoost (2016) posits that sampling is a technique used to take a subset of people from a larger population.In this study, non-probability sampling was employed.According to McCombes (2019), non-probability sampling is when individuals are selected without following a random criterion, and not every member of the population has a chance of being selected for the study.As a result, this study used convenience sampling methods to select nine Grades 10 and 11 EGD teachers that were available.Table 1 below shows the participants' bibliographies.Taherdoost (2016) posits that convenience sampling involves selecting participants because they are readily and easily available.This method was used because it is very cheap and helps overcome many limitations that a researcher can stumble upon.The above table shows the bibliography of teachers who participated in the study.Data Collection and Analysis Faloye et al. 
(2022) allude to the fact that there are three commonly used data collection techniques in research: interviews, observations, and questionnaires.A qualitative study normally uses interviews, observations, focus groups, and case studies (Bhandari, 2020;Bhat, 2020).Consequently, this study used semi-structured interviews and classroom observations to gather data.In this study, interviews were used to gain a deeper insight into the matter at hand through one-on-one sessions with the participants.Semi-structured interviews were used because of their ability to get first-hand information from the participants.According to Sotsaka (2015), classroom observation involves being present in the classroom and observing what is happening.This would give a researcher an opportunity to gather some things that he could not get from the interviews.Classroom observations were used to get a sense of the reality of the teaching methods teachers use to integrate ICT into their T&L.Teachers who participated in the classroom observation are those who were interviewed.Only five teachers gave consent to be observed; consequently, only five teachers were observed.The classroom observation schedule was adapted from the TPACK framework by Koehler and Mishra (2009).The observations were 60 minutes each, as most lessons in uMgungundlovu District are 60 minutes long.The researcher observed five lessons from those teachers who consented from five different schools.The observation schedule was adapted from the TPACK framework.The observations took place during the lesson, but there was no interaction with the teacher or learners as the researcher conducted a non-participant observation.The researcher was assigned a place in the back of the classroom to sit and observe without making any comments. Name of Teachers Gender Majors Experience Data collected through interviews was then subjected to a process called transcription after each interview and typed, showing respondents' quotes as they were responding to the questions asked.The data was then coded, analysed, and discussed thematically.The presentation and analysis of the data took the form of narratives and detailed descriptions with quotes from the respondents to capture their actual views.Verbatim quotations were used in thematic discussions of interview data to support the results.For the classroom observations, field notes were made and reported descriptively.The observation data was then analysed according to the observation schedule. Ethical Considerations The principals of all selected schools were asked in writing for permission to use their schools as the study sites.To ensure the integrity of the study, the researcher sent consent forms to respondents, ensuring that participation in the study was voluntary and that if respondents felt uncomfortable during the study, they could stop the study at any time without negative consequences.Confidentiality in the information conveyed by participants was maintained, and the data collected was only used for the purpose of this study.Pseudonyms were used to ensure the anonymity of schools and teachers.Schools were referred to as schools A-I, and teachers were referred to as teachers A-I.The data collected is kept in safe storage. 
FINDINGS Interview findings The interview questions were intended to answer the main question: What is the state of EGD teachers' readiness for the integration of ICT in T&L in uMgungundlovu secondary schools?And the sub-research questions were as follows: (1) What are the challenges faced by EGD teachers in the adoption of ICT in EGD classrooms?(2) What is the EGD teachers' technological knowledge in T&L? Themes were then created from the teachers' responses for better discussion.Below are the responses of the participants based on the questions asked.Q1: During your undergrad studies (at the university), did you do any module(s) related to using technology in teaching and learning?If yes, what is it that was mainly taught?Below is how they responded: From the responses, only one theme emerged.Theme: Teachers were or were not exposed to technology at the university level.THEME 1: Teacher's Exposure to Technology at University Level Many researchers have found that teachers' ICT background from university does influence the willingness of integrating ICT when they turn professional.Put simply, it means that if teachers were taught how to use ICT in the classroom while they were trained it has a great effect when they turn professional.This assertion is corroborated by Quaye et al. (2015), who postulate that "there is a positively high impact of ICT on T&L in tertiary institutions in the sense that, broadband is a major factor in increasing collaboration between teachers".Below are how the interviewed teachers responded. Teacher A said: "While I was still in university, we were taught about how to integrate technology when teaching EGD.We were mainly taught in using CAD and PowerPoint to teach EGD." Teacher I from School I had this to say: "Yes, we did Auto AutoCAD which was taught mainly on our final year of study even during the course of other years we had AutoCAD classes."Based on the above statements, it shows that teachers got a background in ICT in university.This means that the chances of integrating ICT into their T&L are great.Quaye et al. (2015) indicated that being exposed to ICT in higher education institutions does influence their use it when they turn professional.In the same vein, Matongo (2022) revealed that teachers are not integrating ICT because they are not trained in colleges where they did their teaching qualifications. Q2: What technologies are available to use in this school for the purpose of teaching and learning EGD? Below is how they responded: From the teacher's responses, only one theme emerged.Theme: Schools have resources available to integrate ICT. THEME 2: Schools have Basic Resources Available The researcher wanted to identify the availability of resources in schools for the purpose of ICT integration in EGD teaching.It is no secret that teachers who want to successfully integrate ICT into T&L must have the appropriate resources.In response to the question of availability, teachers responded positively.Teacher I, from School I, said: "In our school we have a centre for EGD that has 20 computers which are installed with AutoCAD, interactive white board that is used for a projector background as well as a photocopying machine" In the same vein, Teacher A from School A said: "There are not much of technologies we have in our school; we only have access to computers with access to the internet and a photocopier." 
Based on the above responses, it is clear that teachers in schools have some technologies at their disposal.They have access to the basic technologies that are sufficient to kick-start the integration of ICT into the T&L of EGD.However, this is contrary to the findings of this study.According to Mathevula and Uwizeyimana (2014), a lack of resources in schools has been proven to be a hindrance to the success of ICT integration.This was further asserted by Alharbi (2021), who revealed that ICT resources have been found to be inadequate in schools, which makes it hard to integrate ICT effectively. Q3: How competent are you in using the technologies that are available in this school for the purpose of teaching EGD? Below is how they responded to the question: From the teachers' responses, one theme emerged.Theme: Teachers know how to use ICT resources. THEME 3: Teachers know How to Use ICT Resources When teachers were asked about their competence in using the technologies available at their school, most showed a high level of competence.Below are some of the responses from the teachers: Teacher H said: "In terms of rating, using AutoCAD to draw or prepare worksheets, I'm comfortable.I can do almost everything.So, I can say I'm good with operating these technologies and with AutoCAD I'm home and dry." Teacher D from School D had the following to say about his level of competency in using technology: "For the computer I will give myself a 7 out of 10 in terms of creating a question.For a printer that would be 10, A photocopier 10, a projector 10.I am good with the Whiteboard I do have the whiteboard in fact in class we've got the whiteboard and a chalkboard." The above responses show that EGD teachers are all good or well-equipped at using the technologies that are available to them.If they had all the required technologies, surely they would have used them effectively and efficiently in the process of T&L.But from the above statements, it shows that they know how to use the ICT resources that are at their disposal.A study that examined the level of ICT skills in teachers, conducted by Alazam et al. (2013), revealed that teachers' levels of use of ICT were moderate.which is contrary to the findings from the statements of the teachers above.Furthermore, Alazam et al. (2013) postulate that ICT integration can prove to be a very crucial tool if technologies are used wisely by teachers.This is exactly what was shown by the teachers when asked about their level of competency.Q4: What is your view of the concept of using technology in your teaching and learning?From the responses, one theme emerged.Theme: Technology is essential in the T&L of EGD. 
THEME 4: Technology is Essential in T&L of EGD All nine teachers responded positively to the idea of using technology when teaching.They highlighted that teaching and technology can never be divorced; others said technology changes with time, so this should also be the case in T&L.Below is how some teachers responded: When asked for her view about the integration of technology into T&L, Teacher I from School I had this to say: "Teaching needs technology, without technology teaching is impaired, because technology moves with time.Technology is very important in T&L of EGD as it makes teaching very easy.And if we are training or raising a generation that must be competitive globally, they need ICT they need technology we just cannot divorce the two (technology and education)."Teacher H from School H had this to say: "One thing for sure you cannot run away from is technology because the world is evolving fast in terms of technology.And we are moving far away from the traditional way of things.Things have evolved so we also need to adapt to change.So, I like to believe, and I believe that technology needs to be incorporated in learning and teaching processes because without it you would not survive.Technology is an integral part of T&L." This is an indication that EGD teachers understand the importance of incorporating technology into their lessons.These assertions are further confirmed by Erişti et al. (2012), who reported that teachers are willing to integrate technology into their lessons.Erişti et al. (2012) further mentioned that teachers saw the integration of technology into T&L as a good thing, so they reacted willingly.In addition to that, Mustafina (2016) reported that teachers had a positive attitude toward integrating ICT into T&L. Q5: What are the challenges that you have experienced in using technology?Below is how they responded: From the teachers' response, only one theme emerged.Theme: Lack of availability of ICT resources in schools. THEME 5: Lack of Availability of ICT Resources in Schools Most teachers indicated that the challenge they faced was the lack of ICT resources in schools.They claimed that they understood the importance of integrating technology, but they did not have the resources to use it for the purpose of T&L.This sentiment is shared by Munje and Jita (2020), as the findings of their study revealed that schools do not have adequate resources to integrate ICT.This was further echoed by Ghavifekr et al. (2016), who said that the greatest challenge in schools is the insufficient provision of computer resources, which prevents the integration of technology in the classroom.This is evident in the teachers' responses below. Teacher F had the following to say: "The problem is that there is a shortage of resources because our school is very big."Alharbi (2021) states that there are a host of challenges that teachers come across every day when trying to integrate ICT into T&L.One of those challenges is the lack of provision for educational software such as AutoCAD, which plays a huge role in improving learners' spatial visualisation. Teacher G, when asked, said: "In this school we do not have access to AutoCAD.As a result, learners are failing to understand some chapters better as AutoCAD simplify abstract concept." 
On the contrary, some teachers mentioned that the department did provide the school with resources, but there were other challenges that hindered them.This is echoed by one of the participants in the Munje and Jita (2020) study, who said, "The DBE had provided the school with computers, but due to theft, these were no longer available."This is an indication that DBE is making provisions so that teachers can integrate ICT into T&L.Teachers and learners encountered many challenges when trying to use video as a tool to integrate ICT.According to Li and Lalani (2020), those challenges included but were not limited to slow internet connections and electricity outages, to mention a few.These statements from the teachers indicate that in schools there is a lack of availability of resources. Q6: What do you think can be done to assist teachers who are technologically disadvantaged?Below is how they responded: From the teachers' responses, only one theme emerged.Theme: Department of Education should conduct workshops. THEME 6: Department of Education should Conduct Workshops According to the teachers' responses, all nine teachers concur about how they think technologically disadvantaged teachers can be assisted.All the teachers said the DoE should take the initiative in training EGD teachers so that they would be in a better position to integrate ICT into T&L.They all believe that training teachers in ICT through workshops can be very helpful, as illustrated by their responses. Teacher B: "I think there must be workshops.The Department of Education and the subject advisors must assist teachers through workshops on how to integrate technology into teaching." In the same vein, Teacher C said: "The Department of Education could assist in terms conducting workshops to get the teachers to be taught on how to use these technologies." The above responses from the teachers share the same sentiment about the importance of workshops to equip teachers.The sentiments expressed by these teachers echo that of Msila (2015) who argued that the district should train teachers so that they would be ICT efficient.This view is further attested to by Barbour (2014), who emphasises that teachers should use teacher development programmes to receive proper training so that they can integrate ICT.Tasir et al. (2012) posit that there has been an increase in the number of countries that are now undertaking a programme of skills development for teachers in ICT integration.Through all the questions that were posed and responses that were given by the teachers, one can conclude that EGD teachers from uMgungundlovu district are ready to integrate ICT into T&L, although there are a few challenges that pose a threat. Observation Findings Observations were conducted to assist in responding to sub-research questions RQ 1 and RQ 2, which are: (1) What are the challenges faced by EGD teachers in the adoption of ICT in EGD classrooms?(2) What is the EGD teachers' technological knowledge in T&L? Below are the observations from different teachers with respect to the components of TPACK, which are TK, TCK, and TPK. DISCUSSIONS Discussion based on interviews The researcher established that teachers have ICT backgrounds from the university, which assists them in understanding the concept of ICT.Having an ICT background from the university influences the way teachers integrate ICT into their T&L.Quaye et al. 
(2015) state that being exposed to technology while still at university influences teachers to use it when they turn professional. Consequently, EGD teachers from uMgungundlovu District are integrating ICT into T&L, as most of them were exposed to technology during their training at university. The findings further established that teachers have access to basic ICT tools like the IWB, photocopiers, and computers, which is a start for integrating ICT into T&L; however, some teachers indicated a lack of educational software like AutoCAD because the license is very expensive and there is a shortage of computers that can support AutoCAD. On the other hand, some teachers indicated that they do not have ICT tools because these were stolen after the DoE provided them. This concurs with findings in a study by Alharbi (2021), which highlights lack of infrastructure as one of the challenges that hinder teachers from integrating ICT. The study further established that EGD teachers are very competent in using the technologies that are available to them at school, such as computers, the IWB, projectors, and photocopiers. This is discussed by Karsenti (2016), who notes that most teachers use the IWB in their EGD classrooms. The only challenge they had was the use of AutoCAD, which most teachers do not have access to. Teachers consistently remarked that the DoE needs to step up and conduct workshops so that they can integrate technology. From the interviews, the study established that teachers understand the importance of using technology in the T&L of EGD. They outlined that since the world is migrating to 4IR, it is impossible to divorce education and technology, as they should go hand in hand. The nature of the subject of EGD warrants the use of ICT tools so that certain concepts can be taught more effectively. The study further established that even though teachers understand the importance of integrating ICT into EGD lessons, there are still problems that hamper the process, such as a lack of internet connection, power outages (load shedding), and a lack of educational software (Mathevula & Uwizeyimana, 2014). The study further established that for all EGD teachers to be ICT-equipped, they need to be trained, and the DoE must conduct workshops that will equip teachers with relevant ICT skills. According to Alazam et al.
(2013) and Matongo (2022), for teachers to be able to integrate ICT into EGD lessons, they must be trained. Training ensures that teachers are able to integrate ICT effectively, which results in learners understanding the abstract concepts taught in EGD, as these concepts are better studied using technology. Discussion Based on Observations The teachers who were observed displayed good knowledge of the technologies at their disposal. They showed a clear understanding of all the technologies they used. The technologies used by teachers ranged from printers and whiteboards to laptops, computers, and projectors. The availability of these resources in schools was attested to by Mathevula and Uwizeyimana (2014), who revealed that TVs, photocopiers, laptops, and computers are some of the technologies that teachers have access to for the purpose of integrating ICT. All the technologies used by teachers are among the devices mentioned by Huggins and Izushi (2002) when defining ICT. Huggins and Izushi (2002) defined ICT as technologies ranging from computers to interactive whiteboards, projectors, and access to the internet. However, the researcher observed that there is a shortage of overhead projectors and that accessing AutoCAD is a serious challenge. This is in contrast to the findings of studies by Bakadam and Asiri (2012) and Karsenti (2016), which found that most EGD teachers have access to the IWB and photocopiers as basic ICT tools. This is again one of the requirements that an EGD class should meet. The DBE (2011) states that it is a requirement that an EGD classroom have an overhead projector, a whiteboard, and access to AutoCAD. What the researcher observed is that even though there is a shortage of other ICT tools, teachers were able to use what they had at their disposal to integrate ICT into EGD classrooms. Teachers use computers to print worksheets so that learners can draw. The researcher further observed that EGD teachers used the IWB well, which is something that was alluded to by Karsenti (2016). The researcher observed that most teachers had access to ICT resources and used them effectively. The second most used technology after the IWB was the photocopier, which was the device used to print or make copies of the worksheets that learners were using. Teachers exhibited high levels of competency in using an IWB and a photocopier, among other technologies at their disposal. From the observations, it was evident that EGD teachers were able to use technologies for their intended purpose, in line with the content being taught. This speaks to the aspects of TPACK (TK, TPK, and TCK) that the researcher was observing. All these components were observed to be strong in the teachers, which indicates that EGD teachers in uMgungundlovu District are ready to integrate technology into the T&L of EGD. Limitations of the Study One limitation of the study was that the researcher was only allowed to observe Grades 10 and 11, but not Grade 12. Another limitation was that the researcher was only allowed to interview and observe one teacher per school. The researcher is of the opinion that if access to more than one teacher per school had been provided, more data might have been produced and the findings might have been improved.
CONCLUSION The main objective of this study was to investigate the level of readiness of EGD teachers for the integration of ICT in T&L in uMgungundlovu secondary schools. From the results of this study, it is evident that EGD teachers in uMgungundlovu District are indeed ready to integrate ICT into their teaching. The findings of this study showed that EGD teachers understand the importance of using technology when teaching because of the nature of the subject. The findings further revealed that if technology is used in EGD lessons, abstract concepts can be manipulated to the advantage of learners. To achieve that, AutoCAD and other technologies must be implemented as prescribed by the DBE. The findings also indicate that EGD teachers are integrating ICT with the few ICT tools that they have. All the schools observed had at least a basic form of ICT tool, ranging from an IWB to a photocopying machine. From the observations, it was revealed that EGD teachers use the IWB exceptionally well and all have access to a photocopier, which is used to produce documents; this indicated that their technological knowledge is exceptional. The findings of the study imply that teachers need ICT training so that they can integrate ICT into lessons. The findings echoed the teachers' own indication that the DBE needs to conduct workshops so that technologically disadvantaged teachers can be capacitated. The findings also revealed that teachers lack the skills to use AutoCAD because schools do not have access to it. As a result, teachers were unable to use AutoCAD, which is a DBE requirement for any EGD class. The findings also revealed that workshops can be used to equip teachers with the relevant skills to operate AutoCAD. According to the findings, therefore, the objectives of the study have been met, as the findings reveal that EGD teachers from uMgungundlovu District are ready to integrate ICT into T&L. Recommendations Based on the findings and discussions of the study, the study recommends that the DBE in KwaZulu-Natal provide ICT resources to schools so that they can integrate ICT into their EGD lessons. The study also recommends that the DBE in KwaZulu-Natal provide adequate security so that resources are not stolen again. The study further recommends that the DBE intervene and assist those teachers who are technologically disadvantaged so that all EGD teachers can be on the same level in terms of using ICT resources. This can be done by conducting workshops and training that will equip EGD teachers with relevant ICT skills. The study recommends that the DBE provide all schools that offer EGD with AutoCAD licenses.
9,129.4
2023-09-18T00:00:00.000
[ "Engineering", "Education", "Computer Science" ]
Experimental exposure to 3-monochloropropane-1 , 2-diol from the pre-puberty causes damage in sperm production and motility in adulthood 3-Monochloropropane-1,2-diol (3-MCPD) is a food contaminant that can be formed during the thermic processing of various foodstuffs. Studies of reproductive toxicology of 3-MCPD are mainly concentrated in the evaluation of possible insults caused by exposure of adult animals. However, the prepuberty might be a period of different susceptibility to chemicals. The aim of this study was to evaluate the effects on reproductive endpoints of the 3-MCPD-exposure prepubertal male rats. Wistar male rats were assigned to 4 groups: control and exposed to 2.5; 5 or 10 mg kg day of 3-MCPD for 30 days by gavage. Testis and epididymis were used for sperm counts and histology analysis. Sertoli cell number and dynamic of the spermatogenesis were evaluated. Sperm were collected from the vas deferens for evaluation of the sperm motility and morphology. Number of sperm with progressive movement, number of Sertoli cells and germ cells and relative daily sperm production were decreased in the groups exposed to 5 and 10 mg kg day of 3-MCPD. Sperm morphology, testicular and epididymal histology were comparable among groups. Results show that 3-MCPD-exposure of rats from prepuberty might cause alterations in spermatogenesis and sperm maturation, similarly to exposure in adulthood. The exact mechanism of 3-MCPD formation is still unknown (Rahn & Yaylayan, 2011).However, the compound may be formed during thermic processing, as a reaction product of triacylglycerols, phospholipids or glycerol and hydrochloric acid in fat-based or fat-containing foods, and refinement process in vegetable fats and oils (Jędrkiewicz, Kupska, Głowacz, Gromadzka, & Namieśnik, 2016).Moreover, domestic processing (e.g.grilling and toasting) and the migration from coating materials treated with epichlorohydrin also can increase the 3-MCPD levels in foods (Crews, Brereton, & Davies, 2001, Hamlet & Sadd, 2009). The maximum tolerable daily intake of the 3-MCPD is 2 μg kg -1 bw day -1 .According its food occurrence and consumption, the exposure calculated of the compound for general population might range 0.02 -0.7 μg kg -1 bw day -1 .However, high consumers might ingest of 0.06 to 2.3 μg kg -1 bw day -1 of 3-MCPD (Food and Agriculture Organization [FAO] & World Health Organization [WHO], 2007). The presence of this contaminant in the diet raises concerns for the risks it may pose to health, due potential carcinogenic, effect possibly mediated by mechanisms involving either hormonal disturbances or cytotoxicity (Robjohns, Marshall, Fellows, & Kowalczyk, 2003).Moreover, Cho et al. (2008) indicate kidneys, testis and ovary as potential targets of the 3-MCPD toxicity. 3-MCPD has been associated to reproductive impact in adult rats (Sun et al., 2013, Sawada et al., 2015), but the effects of the exposure from the prepubertal period are unknown.Spermatogenesis and steroidogenesis are not yet fully established during prepuberty, which in rat occurs at postnatal day 36-55 or 60 (Clegg, 1960, Ojeda, Andrews, Advis, & White, 1980).This phase of the postnatal sexual development has been related as a critical period and more susceptible to reproductive impairment caused by chemical agents (Johnson, Welsh, & Wilker, 1997).That is why the possible reproductive insults caused by exposure in prepuberty should be particularly investigated (Perobelli, 2014). 
Children and adolescents may be more exposed to 3-MCPD than adults (Li, Nie, Zhou, & Xie, 2015). Thus, the aim of this study was to evaluate the effects of 3-MCPD exposure from prepuberty on reproductive endpoints. For this, testicular and epididymal histology and sperm parameters were analyzed, using the reproductive tract of Wistar rats as the experimental model.

Animals
Male (45 days old, n = 40) Wistar rats supplied by the Central Vivarium of Unoeste - Universidade do Oeste Paulista - were housed in the Vivarium of Experimentation at Unoeste. During the experiment, animals were allocated into polypropylene cages (43 × 30 × 15 cm) with laboratory-grade pine shavings as bedding. Rats were maintained under controlled temperature (23 ± 1 °C) and lighting conditions (12L:12D photoperiod). Rat chow and filtered tap water were provided ad libitum. The experimental protocol followed the Ethical Principles in Animal Research of the Brazilian College of Animal Experimentation and was approved by the Ethics Committee for Use of Animals at Unoeste (Protocol # 2055-CEUA).

Experimental design and treatment
The study was conducted according to the experimental design described below. Rats were randomly assigned into four experimental groups. Control animals (n = 10) received vehicle (saline solution 0.9%) for 30 days by gavage. Rats from the 3-MCPD groups (n = 10 per group) received 2.5 (treated A - TA), 5 (treated B - TB) or 10 mg kg⁻¹ bw day⁻¹ (treated C - TC) of 3-MCPD (Sigma Chemical Co., St. Louis, U.S.A.) diluted in 0.9% saline solution, for 30 days by gavage. Rats were weighed three times per week, and their daily intake of chow (in g) and water (in mL) was estimated. In addition, clinical signs of toxicity were observed.

Organ collection
At the end of treatment, rats from each experimental group were euthanized with sodium thiopental (100 mg kg⁻¹) by intraperitoneal administration. The right testis, epididymis and vas deferens, ventral prostate, seminal vesicle (without the coagulating gland and full of secretion), liver, kidneys and pituitary were removed and their weights (absolute and relative to body weight) were determined.

Sperm analysis
Immediately after euthanasia, the left vas deferens was collected and spermatozoa were obtained with the aid of a syringe and needle, through internal rinsing with 1.0 mL of PBS solution at 34 °C. A warmed Neubauer chamber was loaded with a small aliquot of the sperm solution. With the aid of a syringe and needle, sperm were removed from the right vas deferens through internal rinsing with 1.0 mL of buffered formalin. Sperm morphology analysis was performed according to Seed et al. (1996) and abnormalities were classified according to Filler (1993). Right testes were decapsulated, and the caput/corpus and cauda segments of the right epididymis were separated. Homogenization-resistant testicular spermatids (stage 19 of spermiogenesis) and sperm in the caput/corpus epididymis and cauda epididymis were counted as described previously by Robb, Amann, and Killian (1978), with adaptations by Fernandes et al. (2007). To calculate daily sperm production (DSP), the number of spermatids was divided by 6.1 (the number of days these cells are present in the seminiferous epithelium). The sperm transit time through the epididymis was determined by dividing the number of sperm in each portion by the DSP.
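To make the counting arithmetic concrete, the following minimal Python sketch reproduces the DSP and epididymal transit-time calculations described above; the input counts are hypothetical placeholder values, not data from this study.

```python
# Daily sperm production (DSP) and epididymal transit time, following
# the calculation described above (Robb et al., 1978).
# All input counts below are hypothetical placeholders.

SPERMATID_RESIDENCE_DAYS = 6.1  # days stage-19 spermatids stay in the epithelium

def daily_sperm_production(spermatid_count: float) -> float:
    """Homogenization-resistant spermatid count -> sperm produced per day."""
    return spermatid_count / SPERMATID_RESIDENCE_DAYS

def transit_time_days(region_sperm_count: float, dsp: float) -> float:
    """Sperm number in an epididymal region divided by DSP gives days in transit."""
    return region_sperm_count / dsp

# Example with made-up counts (x10^6 sperm):
testis_spermatids = 150.0
caput_corpus, cauda = 80.0, 120.0

dsp = daily_sperm_production(testis_spermatids)
print(f"DSP: {dsp:.1f} x10^6 sperm/day")
print(f"Caput/corpus transit: {transit_time_days(caput_corpus, dsp):.1f} days")
print(f"Cauda transit: {transit_time_days(cauda, dsp):.1f} days")
```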
Histological procedures and quantitative analysis of spermatogenesis
The left testis and epididymis were collected and fixed in buffered formalin (10%) for 24 hours. After this period, these organs were sectioned and returned to the buffered formalin for an additional 24 hours. The pieces were embedded in paraffin wax and sectioned at 5 μm. The sections used for histological evaluation were stained with hematoxylin and eosin (HE), periodic acid-Schiff (PAS) and Masson's trichrome and examined by light microscopy. One hundred random tubular sections per animal (n = 10 animals per group) in three nonconsecutive testis cross-sections were classified into four categories, according to the type of germ cell present. The groups of stages I-VI (one spermatid generation), VII-VIII (spermatozoa), IX-XIII (two generations of spermatids) and XIV (secondary spermatocytes) of the seminiferous epithelium cycle (Leblond & Clermont, 1952) were identified under a light microscope (Leica DMLS) at ×200 magnification. Nuclei of Sertoli cells and germ cells (type A spermatogonia, pachytene primary spermatocytes and round spermatids) were counted in 20 seminiferous tubules per rat (n = 10 animals per group) at stage VII of spermatogenesis, under a light microscope (Leica DMLS), at ×400 magnification. The numbers were corrected for nucleolus/nucleus size and section thickness, according to Abercrombie (1946).

Statistical analysis
Statistical analyses were conducted by ANOVA with a posteriori Tukey test, or by the nonparametric Kruskal-Wallis test with a posteriori Dunn test, according to the characteristics of each variable. The results were expressed as mean ± standard deviation (SD) or as median and quartiles 1 and 3 (Q1/Q3). Differences were considered significant when p < 0.05.

Results and discussion
There was no statistical difference (p > 0.05) in body weight among experimental groups (Table 1) during the treatment period. Feed intake and water consumption were affected by 3-MCPD exposure only at the end of the treatment period (data not shown). On the 27th and 30th days of exposure, there was an increase (p < 0.05) in feed intake in the TA group compared to the control group. A difference (p < 0.05) between the TA and TB groups was also observed on the 27th day of treatment. On the 30th day of treatment, water consumption was lower (p < 0.05) in the TC group compared to the TA group, but not in relation to the other experimental groups. The absolute and relative weights of reproductive organs, pituitary, liver and kidneys are shown in Table 1. A significant increase (p < 0.01) in the absolute weight of the kidneys was observed in rats from the TC group in comparison to the control group. However, the relative weight of this organ was similar among experimental groups. The weight of the vas deferens was increased (p < 0.05) in the TA group when compared to the control group. Sperm number per gram of testis and daily sperm production per gram of testis were significantly reduced (p < 0.05) in the TC and TB groups when compared to the control and TA groups, respectively (Table 2). Nevertheless, sperm number in the caput/corpus epididymis, epididymal transit time and sperm morphology were similar among experimental groups (Table 2). The number of sperm with progressive movement was significantly decreased (p < 0.05) and, consequently, the percentage of immotile sperm was increased (p < 0.05) in the TB and TC groups in relation to the control group (Figure 1). Table 2.
Sperm parameters of rats from control groups and groups treated with 2.5 (TA), 5 (TB) and 10 mg kg⁻¹ day⁻¹ (TC) of 3-MCPD. The stages of spermatogenesis were not significantly affected by the different doses of 3-MCPD exposure (Table 3). On the other hand, the number of Sertoli cells and germ cells was lower in the TB and TC groups when compared to the control and TA groups (Table 3). 3-MCPD is a food contaminant found in various foods that are part of the daily diet. Experimental studies with adult rodents have shown toxic effects of this compound, especially on the urinary and reproductive systems (Cho et al., 2008, Bakhiya, Abraham, Gürtler, Appel, & Lampen, 2011, Kim et al., 2014). However, studies of the reproductive effects of exposure from the prepubertal phase, which could involve relevant exposure, are scarce. Although the levels of exposure to food contaminated by 3-MCPD are not high for the general population, the growing consumption of some foods with higher levels of the contaminant, such as frozen meals, raises concern (Arisseto, Vicente, Furlani, & Toledo, 2013). Furthermore, the presence of this compound in infant formulas and breast milk (Zelinková et al., 2008, Wöhrlin, Fry, Lahrssen-Wiederholt, & Preiß-Weigert, 2015) also indicates the need for further toxicological research in this area. The assessment of body weight and nutritional status is extremely important to obtain information on the overall health of animals exposed to chemicals (Clegg, Perreault, & Klinefelter, 2001). Reproduction is often one of the first functions to be affected when there is inadequate nutrition and loss of body weight (Krasnow & Steiner, 2006). Thus, the analysis of these parameters is essential to the interpretation of the effects of chemicals on the reproductive system. In the current study, the final body weight of rats from the different experimental groups was similar, corroborating the study of Onami et al. (2014).

[Figure: photomicrographs of testis sections from rats of the control and 3-MCPD-treated groups, showing the absence of pathological alterations.]

Sperm parameters were impaired by 3-MCPD exposure. Sperm number per gram of testis and daily sperm production per gram of testis were significantly reduced in rats from the groups exposed to the higher doses of the food contaminant (5 and 10 mg kg⁻¹ bw day⁻¹). These data corroborate the observed reduction in germ cell number. However, Kim et al. (2012) reported that exposure of adult rats (3, 10, and 30 mg kg⁻¹ bw day⁻¹ of 3-MCPD for 7 days) caused no changes in the number of sperm heads in the testis. The impact on relative sperm production was not accompanied by changes in the cytoarchitecture of the seminiferous tubules or in the frequencies of the different stages of the spermatogenic process. This absence of histological injury was also observed in the study of Kwack et al. (2004). These researchers found impairment of fertility and pregnancy outcomes, without histopathological changes in testis and epididymis, after paternal exposure to 5 mg kg⁻¹ bw of 3-MCPD for four weeks. Despite this, Cho et al. (2008) observed degeneration of the seminiferous epithelium in adult mice after subchronic exposure to the higher doses of 36.97 and 76.79 mg kg⁻¹ of 3-MCPD.
In the present study, in spite of the maintenance of the general morphological characteristics of the testes and the absence of evident histological alterations, 3-MCPD exposure affected the number of germ cells per seminiferous tubule. In adult rats, Madhu, Sarkar, Biswas, Behera, and Patra (2011) observed reductions in seminiferous tubular areas, secondary spermatocytes, and the nuclear diameter of Leydig and Sertoli cells after 45 days of exposure to 1 mg kg⁻¹ bw day⁻¹ of 3-MCPD. Moreover, spermatogonia, primary spermatocytes and Sertoli cells increased at 15 and 45 days of exposure. On the other hand, in the current study the number of Sertoli cells was decreased in the seminiferous tubules of rats exposed to 5 and 10 mg kg⁻¹ bw day⁻¹ of the food contaminant when compared to the control and TA groups. This result suggests that there was cell death rather than a reduction in proliferation, since the exposure began after the end of the Sertoli cell proliferation period (Orth, Gunsalos, & Lampert, 1988). The decrease in Sertoli cell number may have triggered the reduction observed in the counts of type A spermatogonia, pachytene primary spermatocytes and round spermatids. This may be related to the control that Sertoli cells exercise over the magnitude of spermatogenesis (Hess & França, 2005). In the current study, epididymal histology was not impaired by 3-MCPD, corroborating the study of Kwack et al. (2004). On the other hand, Kim et al. (2012) found spermatic granulomas, cell debris, epithelial cell vacuolization and oligospermia in the proximal caput epididymis of adult rats after treatment with 10 mg kg⁻¹ bw day⁻¹ of 3-MCPD for seven days. Evaluation of sperm morphology by light microscopy indicated no difference among the experimental groups, as reported by Kim et al. (2012). Nevertheless, electron microscopy analysis performed by Madhu et al. (2011) revealed sperm abnormalities, including deglutination of the acrosomal part, loss of head capsules, and fragmentation of tail fibrils. 3-MCPD did not affect the transit time through the epididymis; however, sperm motility was reduced, suggesting changes in the sperm maturation process. Several authors have also observed impairment of sperm motility in adult rats after 3-MCPD exposure at different doses (Kwack et al., 2004, Kim et al., 2012). A decrease in H+-ATPase expression in the cauda epididymis (Kwack et al., 2004) and the inhibition of sperm glycolysis enzymes, especially glyceraldehyde-3-phosphate dehydrogenase (GAPDH) and triosephosphate isomerase (Jones & Porter, 1995, Lynch et al., 1998), have been associated with these motility changes.

Conclusion
These results show that prepubertal rats exposed to 3-MCPD might present alterations in the number of Sertoli and germ cells, relative sperm production and motility, with an impact on sperm quality, similar to the impairment caused by exposure of adult animals reported in the literature. Since chloropropanols are present in the daily diet of the population, it is essential to determine their potential risks to human reproductive health. Thus, additional studies are needed to investigate more deeply the mechanisms involved in the reproductive effects observed and the relationship between the changes in sperm parameters and a possible impact on fertility. Table 1. Final body weight and reproductive organ weights of rats from control groups and groups treated with 2.5 (TA), 5 (TB) and 10 mg kg⁻¹ day⁻¹ (TC) of 3-MCPD. Table 3.
Stages of spermatogenesis and the number of Sertoli cells and germ cells in rats from control groups and groups treated with 2.5 (TA), 5 (TB) and 10 mg kg⁻¹ day⁻¹ (TC) of 3-MCPD. a Values expressed as median (Q1-Q3); Kruskal-Wallis test, with a posteriori Dunn test. b Values expressed as mean ± S.D.; ANOVA with a posteriori Tukey test. Different letters indicate a statistically significant difference (p < 0.05).
Likelihood in Choosing Technopreneurship as Career among Undergraduate Students

Technopreneurship is important for businesses to stay competitive in the Fourth Industrial Revolution (IR 4.0) era. It is also a viable strategy to overcome unemployment among youths. However, the development of technopreneurship faces various challenges, and the results so far are considered less than satisfactory. As such, this study aimed to identify the extent of likelihood of choosing technopreneurship as a career among university students and the characteristics pertaining to it. A total of 216 undergraduate students from a public university were surveyed through an electronic questionnaire. Subsequently, the median-split method was used to categorize the students into "low likelihood" and "high likelihood". Cross-tabulation was performed to determine the characteristics pertaining to students' likelihood of becoming technopreneurs. Based on the results obtained, this study concluded that students' likelihood of becoming technopreneurs was not related to gender or family business-ownership background. However, their technopreneurship career choice was related to business- and technology-related study background, living in urban areas, and e-commerce experience. This study further suggested that higher learning institutions (HLIs) and the government should constantly offer well-planned technopreneurship courses or trainings, improve technology infrastructure and provide technopreneurship support to enhance the development of technopreneurship.

Introduction
Entrepreneurial activities are important to a country's development. As businesses enter a new era known as the Fourth Industrial Revolution (IR 4.0), entrepreneurs need to change their ways of doing business as well. Since IR 4.0 emphasizes the use of high-level technology in businesses, such as automation, the Internet of Things and smart technology, there is a need for traditional entrepreneurs to shift towards technopreneurship. Although technopreneurship, or technology-based entrepreneurship, is important in the future competitive landscape, many people still regard it as a new breed of entrepreneurship. As such, many challenges remain, especially in the training and development of entrepreneurship (Jusoh & Halim, 2006; Tan, Karl & Mohamed, 2010). It is important to note that technopreneurship is not only important for a nation's development, but also a solution to unemployment. For instance, the Indian government has started to train its youths to embark on technopreneurship as a strategy against unemployment (Paramasivan & Selladurai, 2017). As urged by Otamiri and Goodlife (2019), youths should stop waiting for government jobs and should instead start up small-scale technology-based businesses by themselves. For years, the Malaysian government has exerted various efforts in developing young entrepreneurs, and various initiatives, plans and schemes have been carried out by the government to encourage more involvement in entrepreneurship among undergraduate students. For instance, the Entrepreneurship Action Plan 2016-2020 was introduced to encourage students to gain personal income while studying, emphasizing the concept of "earning while learning". Despite the various encouragements given by the government, the number of undergraduate students who became entrepreneurs was still small. Data showed that only 3% of students became entrepreneurs during their university time (Bernama, 2017).
This phenomenon signifies that undergraduate students are not keen on choosing entrepreneurship as their career. Therefore, there is a need to scrutinize the issue further. Given the crucial importance of technopreneurship and the low participation of students in entrepreneurial activities, there is a need to extend the inquiry to examine students' perceptions of choosing technopreneurship as their career. Furthermore, studies of technopreneurship as a career choice among university students are still scarce in the literature. The lack of studies pertaining to technopreneurship has left a lacuna in the entrepreneurship literature. Therefore, this study was performed with the following objectives: • To identify the extent of likelihood of choosing technopreneurship as a career among undergraduate students; and • To determine the characteristics pertaining to the likelihood of choosing technopreneurship as a career.

Literature Review
Technopreneurship can be considered a sub-field of entrepreneurship. Selladurai (2016) explained technopreneurship as a process of merging technological expertise with entrepreneurial skills and talents. It is important to note that technopreneurship is a process in which organizational creativity and innovation are used to solve organizational problems in pursuit of satisfactory economic performance (Fowosire, Idris, & Elijah, 2017). Therefore, a technopreneur could be described as someone who thinks like an engineer and acts like an entrepreneur (Paramasivan & Selladurai, 2017). From the perspective of Malaysia, Jusoh and Halim (2006) described technopreneurs as technical or technology-based entrepreneurs, represented by small and medium enterprises (SMEs), seed-level ventures and start-ups in the information and communication technology (ICT) and multimedia sectors. Based on the above explanations, it could be said that technopreneurship is the combination of technology and entrepreneurship for the purpose of economic development and sustainability. Since technopreneurs are well equipped with both technical and business skills, they are important in constantly and continuously improving and redefining the dynamic digital economy (Fowosire et al., 2017). Technopreneurship is closely related to ICT and multimedia; thus, it plays a crucial role in expanding and accelerating businesses and people. It is also important for growing and developing entrepreneurs in the knowledge-based economy, competing in a borderless world and achieving sustainability (Jusoh & Halim, 2006). From the corporate perspective, technopreneurship is important in creating competitive advantages for enterprises and organizations (Dolatabadi & Meigounpoory, 2013). Because technopreneurship emphasizes innovation, it helps universities commercialize their innovations through patenting, licensing and other types of intellectual property. Moreover, it also enables technology transfer between universities and business communities (Lakitan, 2013). True, technopreneurs help to dominate challengers in the technology world, and therefore developing a greater number of young technopreneurs is important (Paramasivan & Selladurai, 2017). Despite its significant contributions to technological and economic development, technopreneurship could also be a viable career choice for youths. Otamiri and Goodlife (2019) pointed out that technopreneurship could help to resolve unemployment among youths through job creation and sustainable income generation.
Young individuals, especially those who have graduated from university, should consider embarking on technopreneurship for job security and regular income generation. As supported by Ikhtiagung and Aji (2019), tertiary education graduates should not be oriented as job seekers only; they should become job creators or entrepreneurs. For years, the Malaysian government has promoted youth entrepreneurship as a means to reduce unemployment and demonstrate youths' capabilities (Khan, Noor & Anuar, 2016). Therefore, developing competitive technopreneurs should start as early as tertiary education. However, the decision to take up technopreneurship as a career depends very much on the youth's mindset. They should be taught to like and show favor for technopreneurship. Changing people's mindset to allow them to think innovatively and developing a technopreneurial culture are important in developing a greater number of technopreneurs (Amante & Ronquillo, 2016). Dolatabadi and Meigounpoory (2013) pointed out that individual factors, such as personal experiences, psychological features and motives, would affect corporate technological entrepreneurship. Specifically, higher learning institutions (HLIs) such as universities, polytechnics and colleges play a significant role in educating and motivating young students to be enthusiastic about technopreneurship (Ikhtiagung & Aji, 2019). However, technopreneurship is still considered an emerging concept, particularly in developing countries (Selladurai, 2016). As Fowosire et al. (2017) mentioned, technopreneurship faces various challenges. Indeed, the creation and development of technopreneurship is subject to various issues such as motivation, risks, obstacles, growth and infrastructure (Jusoh & Halim, 2006). It is believed that understanding people's mindset about technopreneurship is the first step in developing technopreneurs.

Research Method
The population of this study comprised full-time final-year bachelor's degree students from a local public university. They were deemed appropriate because they had completed an entrepreneurship course. Moreover, they would graduate soon and start to search for their own career after graduation. This study randomly selected a sample of 250 students from the seven faculties in the university. The sample size was deemed appropriate because it was larger than 30 and less than 500 (Sekaran & Bougie, 2010). This study was exploratory in nature and adopted descriptive analysis. It used a questionnaire for data collection purposes. Electronic questionnaires were sent to the respondents through various social media. The questionnaire collected respondents' background information using multiple-choice questions. To identify the extent of likelihood of choosing technopreneurship as a career, respondents were asked to indicate their choice on a seven-point rating scale, ranging from 1 = very unlikely to 7 = very likely. Upon the collection of data, this study performed a median-split analysis to turn an ordinal variable into a categorical one. Specifically, the median for likelihood of choosing technopreneurship as a career was identified; then any value below the median was put in the category "low likelihood" and every value above it was labeled "high likelihood" (Field, 2018, Martin, n.d.). Subsequently, cross-tabulation was performed to determine the characteristics pertaining to the likelihood of choosing technopreneurship as a career.
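As a concrete illustration of the median-split and cross-tabulation steps described above, here is a minimal pandas sketch; the column names and the toy data are assumptions for illustration, not the study's dataset.

```python
import pandas as pd

# Toy data standing in for the survey responses (column names are assumptions).
df = pd.DataFrame({
    "likelihood": [3, 5, 6, 4, 7, 5, 2, 6, 4, 5],  # 1=very unlikely ... 7=very likely
    "gender":     ["M", "F", "F", "M", "F", "M", "F", "M", "F", "F"],
})

# Median split: at-or-above the median -> "high likelihood", below -> "low likelihood"
# (respondents exactly at the median were grouped as "high" in this study).
median = df["likelihood"].median()
df["group"] = (df["likelihood"] >= median).map(
    {True: "high likelihood", False: "low likelihood"}
)

# Cross-tabulation of the split against a background characteristic.
print(pd.crosstab(df["gender"], df["group"], margins=True))
```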
Findings and Discussions
Results of Analysis
This study distributed 250 questionnaires to the students, of which 216 were deemed usable. Thus, the response rate was 86.40%. Table 1 depicts the respondents' background information. There were more female students (F=140; 64.81%) than male students. About one third (F=73; 33.80%) of the students were from the Faculty of Business and Management. More than half of them lived in urban areas (F=132; 61.11%). About half of the students' families did not own any business (F=118; 54.63%). It was rather motivating to find that more than 90% of the students had experience in using e-commerce as a buyer only (F=27; 12.50%), a seller only (F=70; 32.41%), or both buyer and seller (F=104; 48.15%). Figure 1 summarizes the students' likelihood of choosing technopreneurship as a career. Most students rated 4 - neutral for likelihood of choosing technopreneurship as a career (F=78; 36.11%), followed by 5 - somewhat likely (F=51; 23.61%), 6 - likely (F=48; 22.22%) and 7 - very likely (F=19; 8.80%). It could be said that students were rather positive about becoming technopreneurs in the future. It is also worth mentioning that the mean value obtained was 4.81 and the median was 5.00. Figure 1: Frequency of Likelihood in Choosing Technopreneurship as Career. As mentioned in the previous section, this study used the median-split method to categorize the students into low likelihood and high likelihood of choosing technopreneurship as a career. The results are demonstrated in Table 2. Following the suggestion of DeCoster, Gallucci and Iselin (2011), cases below the median are put in the "low" group, cases above the median are put in the "high" group, and cases exactly at the median can be put in either group. In this study, in order to make the groups more equivalent in size, respondents who scored exactly at the median (5.00) or above in likelihood of choosing technopreneurship as a career were grouped as "high likelihood" (F=118; 54.63%). Meanwhile, those who rated below 5.00 were categorized as "low likelihood" (F=98; 45.37%). Cross-tabulation was performed to further categorize the respondents according to their extent of likelihood of choosing technopreneurship as a career and their personal background. The results are shown in Table 3 to Table 7. In terms of gender (Table 3), 45 male students (59.21%) and 73 female students (52.14%) showed high likelihood of choosing technopreneurship as a career. About 41% of the male students and less than 48% of the female students were not interested in embarking on technopreneurship. Comparing students living in urban and sub-urban areas (Table 5), the majority of the students from city areas (F=91; 68.94%) showed high likelihood of choosing technopreneurship as a career. In contrast, students who stayed in sub-urban areas tended to show low likelihood (F=57; 67.86%) of choosing technopreneurship as a career. Table 6 illustrates that having family members who own a business might not be important in affecting students' career choice. This study found that for students who had family members owning a business, the majority were inclined to choose technopreneurship as a career (F=57; 58.16%). Surprisingly, the percentage of students who would like to choose technopreneurship as a career was also rather high for those from non-business-owning families (F=61; 51.69%).
Based on Table 7, different e-commerce experience was associated with different levels of likelihood of choosing technopreneurship as a career. It is understandable that students who did not have any e-commerce experience showed lower likelihood of choosing technopreneurship as a career (F=11; 73.33%). As for students who had experience in using e-commerce as a buyer (F=16; 59.26%), a seller (F=41; 58.57%), or both buyer and seller (F=57; 54.81%), they all showed a strong inclination toward becoming technopreneurs in the future.

Discussions
The results of this study showed that students were inclined to choose technopreneurship as a career regardless of their gender and family business-ownership background. However, students who were from business-related and science-and-technology-related education backgrounds, lived in urban areas, and had e-commerce experience were more likely to choose technopreneurship as their career. As explained by Hidayat and Yunus (2019), technology literacy, which refers to the ability to use technology effectively, is inseparable from entrepreneurship. It also plays an important role in enabling entrepreneurs to face the Fourth Industrial Revolution (IR 4.0), which emphasizes the utilization of technology in work. Therefore, students who had sound technology knowledge and business skills would be prone to become technopreneurs in the future. In addition, from the Malaysian perspective, technopreneurship is closely related to ICT and multimedia (Jusoh and Halim, 2006); as such, students might be interested in embarking on technopreneurship because they have considerable experience in using e-commerce either as buyer, seller or both. As business- and science-and-technology-related education background is important, potential technopreneurs should be equipped with both technical knowledge and business skills. Technopreneurship can be taught through a proper education system. As found by Amante and Ronquillo (2016), students who attended a technopreneurship course showed a change in their mindset from being employed to being their own employer. Therefore, a technopreneurship education system is an excellent service for the welfare of young generations, transforming youths into technopreneurs (Paramasivan & Selladurai, 2017). True, nurturing programs such as incubation and communication programs are important (Jusoh & Halim, 2006). As suggested by Otamiri and Goodlife (2019), business incubation centers are crucial in training youths for entrance into technopreneurial ventures. As such, the Malaysian Ministry of Education (MOE) and Ministry of Higher Education (MOHE) should consider setting up business incubators for students at different learning levels. The business incubators should focus on building technology know-how, business and management skills, and creativity and innovativeness among students. The academic staff assigned to teach technopreneurship courses should value creativity and innovation. They should also apply creative and interactive teaching methods. People living in urban areas may be exposed to better technology infrastructure, such as high-speed Internet, wide Wi-Fi coverage and advanced wireless and mobile technology. Moreover, the high-technology infrastructure in urban areas also supports e-commerce. Therefore, students who lived in urban areas and had e-commerce experience were more prone to becoming technopreneurs. Nevertheless, the technology infrastructure requires support from the government.
In addition, external factors such as the role of government in developing and supporting technopreneurship should not be neglected either (Dolatabadi and Meigounpoory, 2013). As Lakitan (2013) mentioned, the government should establish favorable regulations and policies, inject risk capital and create supportive infrastructure. True, basic facilities such as electricity, telecommunications and the Internet should be upgraded and enhanced. In addition, governmental agencies are also urged to provide various support facilities such as technopreneurship training and development programs (Khan et al., 2016). Although government intervention is important, precautions must be taken to ensure that technopreneurs do not become too dependent on government aid (Fowosire et al., 2017).

Conclusion
This study aimed to identify the extent of likelihood of choosing technopreneurship as a career and the characteristics pertaining to it. It concluded that gender and family business-ownership background are not important in affecting students' likelihood of becoming technopreneurs. However, business- and technology-related study background, living in urban areas and e-commerce experience play a role in affecting students' technopreneurial career choice. This study has contributed to both literature and practice. In terms of the literature, it enriched the technopreneurship literature and shed light on undergraduate students' likelihood of becoming technopreneurs in the future. Practically, it suggested that HLIs and the government play important roles in providing relevant technopreneurship education, improving technology infrastructure and rendering technopreneurship support to boost technopreneurship development in the country. One of the main limitations of this study was the median-split method, which is subject to several shortcomings, such as problems in splitting the data, smaller effect sizes and difficulties in finding effects (Field, 2018, Martin, n.d.). In addition, the sample was chosen from a single public university, which limits the generalizability of the results. Therefore, future studies are recommended to employ other data analysis methods, such as inferential statistical analyses. Future researchers are also suggested to extend the sample size by including students from other universities.
Super Restoration of Chiral Symmetry in Massive Four-Fermion Interaction Models

The chiral symmetry is explicitly and spontaneously broken in a strongly interacting massive fermionic system. We study the restoration of chiral symmetry in massive four-fermion interaction models with increasing temperature and chemical potential. At high temperature and large chemical potential, we find the boundaries where the spontaneously broken chiral symmetry can be fully restored in the massive Gross-Neveu model. We call this phenomenon super restoration. The phase boundary is obtained analytically and numerically. In the massive Nambu-Jona-Lasinio model, we find that whether super restoration occurs depends on the regularization. We also evaluate the behavior of the dynamical mass and show the super restoration boundaries on the ordinary phase diagrams.

I. INTRODUCTION
Chiral symmetry is a fundamental property of elementary particles. In quantum chromodynamics (QCD), quarks and gluons are confined into hadrons, which is related to the chiral symmetry breaking. The QCD Lagrangian for the light quarks possesses an approximate chiral symmetry due to the non-vanishing current quark masses. At low energy scales, the constituent quark masses are dynamically generated by the spontaneous breaking of chiral symmetry. It is expected that thermal effects restore the broken chiral symmetry at high temperature and/or large chemical potential. One of the interesting topics in QCD is to investigate chiral symmetry breaking and restoration. We cannot avoid the non-perturbative effects of QCD when studying the chiral symmetry breaking. One possible procedure is to consider the phenomena in a low-energy effective model of QCD. The Gross-Neveu (GN) model [1] is often used to investigate the phase structure of the chiral symmetry under extreme conditions. It is a renormalizable model with a four-fermion interaction, since the model is defined in a two-dimensional spacetime. In the original GN model, the discrete Z_2 chiral symmetry prohibits mass terms. The four-fermion interaction induces a non-vanishing expectation value for the composite operator constructed from the fermion and the anti-fermion, and the chiral symmetry is broken spontaneously. For 2 ≤ D < 4, four-fermion interaction models are renormalizable in the sense of the 1/N_c expansion and possess an ultraviolet-stable fixed point [2-4]. The broken chiral symmetry is restored at high temperature and/or large chemical potential, and the phase boundary for the massless model has been shown analytically and numerically in Ref. [5].
In the massive GN model, the mass term breaks the chiral symmetry explicitly, and it is expected to avoid the no-go theorem [6-8]; the mass term can be kept even together with the axial interaction term at zero temperature. The Nambu-Jona-Lasinio (NJL) model [9,10] is a four-fermion interaction model with current quark mass terms and is considered one of the low-energy effective models of QCD. NJL-type models are often used in the analysis of phase diagrams at a finite temperature, T, and chemical potential, µ. Since the model contains a four-fermion interaction and is not renormalizable in four-dimensional spacetime, the model predictions may depend on the regularization method. One introduces a cutoff scale to remove the UV divergence in the fermion loops. The three-dimensional momentum cutoff scheme is often adopted (for reviews, see [11-14]) to investigate the phase diagrams. Other methods are also used in the model: the Pauli-Villars [15-18], the Fock-Schwinger proper-time [19-29], and the dimensional regularization [30-36]. Some papers comprehensively study the differences among these regularizations [37,38].

The paper is organized as follows: in Sec. II, the massive GN model is introduced. We renormalize the four-fermion coupling and the mass parameter to obtain a well-defined effective potential at a finite temperature and chemical potential. In Sec. III, we derive the equation for the condition of super restoration from the gap equation. We evaluate the behavior of the dynamical mass, and draw the phase boundary of the super restoration and the boundary of the chiral condensate on the µ-T plane. In Sec. IV, we compute the same physical quantities in the two-flavor NJL model with two different regularizations. Finally, a summary and discussion are given in Sec. V.

II. GROSS-NEVEU MODEL
We briefly introduce the action of the massive GN model in D-dimensional spacetime (2 ≤ D < 4), where m_0 and λ_0 are the bare mass parameter and the bare coupling of the four-fermion interaction, respectively. N denotes the number of copies of the fermions ψ(x). In the massless case, m_0 = 0, this model enjoys the discrete chiral Z_2 symmetry.
Introducing an auxiliary field, σ(x), into this action, we obtain a redefined action. In this expression (2), the constant term, N m_0^2/(2λ_0), is dropped because it does not affect physical quantities. The original action (1) is reproduced by substituting the solution of the equation of motion, σ(x) = −λ_0 ψ̄(x)ψ(x)/N + m_0, after restoring the dropped term to this action (2). Assuming that the auxiliary field is constant (corresponding to considering only the homogeneous chiral condensate), σ(x) = σ, we obtain the effective potential in the leading order of the 1/N expansion, up to a constant. In the massless case, m_0 = 0, it is a well-known fact that, under a certain renormalization condition, the gap equation, ∂V_D(σ)/∂σ|_{σ=m_χ} = 0, can be solved exactly. For instance, under the renormalization condition where µ_r and λ_r stand for a renormalization scale and a renormalized coupling, respectively, the solution is given in closed form. The effective potential with the renormalized coupling still contains a divergence related to the bare mass parameter. To remove the divergence, we also have to renormalize the mass parameter. We choose a renormalization condition with m_r denoting the renormalized mass parameter. Under this condition, the mass parameter gives the tilt of the effective potential, and the solution of the gap equation converges to m_r in the weak coupling limit, λ_r → 0; in the following, we refer to m_r as the current mass. This yields the renormalized effective potential. In the two-dimensional limit, it is confirmed that this expression is consistent with Ref. [39].

To consider the GN model at a finite temperature, T, and a chemical potential, µ, we modify the action (1) using the Matsubara formalism. The effective potential then follows, with C_D = (4π)^{-(D-1)/2} tr I / Γ((D-1)/2). In this expression, the thermal part (the second line in Eq. (8)) is separated from the vacuum part (the first line in Eq. (8)). We use the symbol M for the solution of the gap equation, ∂V_D(σ; m_r, µ_r; T, µ)/∂σ|_{σ=M} = 0, throughout this paper; with our definition of σ, the solution of the gap equation is equivalent to the dynamically generated fermion mass.

III. SUPER RESTORATION
A. Analytical results
At zero temperature and zero chemical potential, the dynamically generated fermion mass is always non-zero in the massive theory and converges to m_r from above in the weak coupling limit, λ_r → 0. It is also well known that the chiral symmetry tends to be restored at high temperature or large chemical potential. In the massless theory, one observes a first-order phase transition boundary at low temperature and large chemical potential, and a second-order one at high temperature and small chemical potential on the µ-T plane. In this section, we show that the dynamical mass decreases below the current mass, m_r, at high but finite temperature and chemical potential; we call such a phenomenon super restoration. We consider a solution of the gap equation satisfying M ≤ m_r on the µ-T plane. If such a solution exists and is continuous outside the first-order phase transition boundary on the µ-T plane, then at least there is a solution that satisfies M = m_r. Setting M = m_r (> 0) in this equation, we obtain a condition whose chiral limit, M = m_r → 0, can be taken.
The reduced expression is given in terms of the polylogarithm Li_a(s). Equations (10) and (11) do not by themselves establish the existence of the solution, but they provide a way to confirm it. At zero chemical potential, the equation can be solved exactly, with T̄ denoting the temperature at which the dynamical mass becomes m_r in the chiral limit. The specific values are e^{1+γ}/π ≃ 1.54 in D = 2 and 1/ln 2 ≃ 1.44 in D = 3. They are about two to three times higher than the critical temperature (e^γ/π ≃ 0.57 in D = 2 and 1/ln 4 ≃ 0.72 in D = 3) of the massless theory [5,40]. In the same way, we can solve the equation exactly at zero temperature, with μ̄ denoting the chemical potential at which the dynamical mass becomes m_r in the chiral limit. The specific values are e/2 ≃ 1.36 in D = 2 and 2 in D = 3. These are also about two times larger than the critical chemical potential (1/√2 ≃ 0.71 in D = 2 and 1 in D = 3) of the massless theory [5,40]. The super restoration boundary in the chiral limit is described by Eq. (11) as the curve connecting T̄ and μ̄ on the µ-T plane. For a sufficiently small current mass, the super restoration boundary in the chiral limit approximately gives the boundary at a finite current mass.

B. Numerical results
The solution of the gap equation in the chiral limit, M = m_r → 0, has been found analytically; in particular, we have found the specific values at zero temperature or zero chemical potential. We now calculate numerically the super restoration boundary at a finite m_r based on Eq. (9) and plot it on the µ-T plane. In our calculations, we set the coupling constant so that M/µ_r = 1 in the trivial condition (m_r = 0, T = 0 and µ = 0) for arbitrary dimensions, and tr I = 2^{D/2}. The behavior of the dynamically generated fermion mass as a function of temperature and chemical potential is shown in Figs. 1 (D = 2) and 2 (D = 3) for a fixed current mass, m_r = 0.2. In the left figure, the second-order phase transition is replaced with a crossover because of the finite mass. As the figures show, the dynamical fermion mass smoothly falls below the current mass at high temperature or large chemical potential. The super restoration boundaries are shown as the outer lines in Fig. 3; the solid and dotted lines denote the chiral limit and the massive cases with m_r/µ_r = 0.2 (circles) and 0.5 (triangles), respectively. For comparison, we also plot the first-order phase transition boundaries (solid) and the peaks of the chiral susceptibility (dotted), defined by ∂M/∂m_r, at m_r/µ_r = 0.01 (inner) and m_r/µ_r = 0.2 (middle). In three dimensions, there is no phase boundary, and the lines of the chiral susceptibility are curved so as to extend the enclosed area at low temperature and large chemical potential. Since these lines are drawn in terms of the susceptibility, they do not necessarily coincide with the lines where the dynamical mass changes the most.
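The closed-form values quoted above are easy to check numerically; the short script below (an illustrative sketch, not code from the paper) evaluates them together with their ratios to the massless-theory critical values.

```python
import math

gamma = 0.5772156649015329  # Euler-Mascheroni constant

# Temperature (in units of the T = mu = 0 dynamical mass) at which M = m_r
# in the chiral limit, and the massless-theory critical temperature.
T_bar = {2: math.e ** (1 + gamma) / math.pi, 3: 1 / math.log(2)}
T_c   = {2: math.e ** gamma / math.pi,       3: 1 / math.log(4)}
# Corresponding chemical potentials at T = 0.
mu_bar = {2: math.e / 2,       3: 2.0}
mu_c   = {2: 1 / math.sqrt(2), 3: 1.0}

for D in (2, 3):
    print(f"D={D}: T_bar={T_bar[D]:.2f} ({T_bar[D]/T_c[D]:.2f} T_c), "
          f"mu_bar={mu_bar[D]:.2f} ({mu_bar[D]/mu_c[D]:.2f} mu_c)")
# D=2: T_bar=1.54 (2.72 T_c), mu_bar=1.36 (1.92 mu_c)
# D=3: T_bar=1.44 (2.00 T_c), mu_bar=2.00 (2.00 mu_c)
```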
The figures show that the super restoration boundaries almost overlap one another. The area enclosed by them tends to shrink, in particular at high temperature and small chemical potential. On the other hand, the area tends to expand at low temperature and large chemical potential in D = 2. This behavior of the super restoration boundaries differs from that of the first-order phase transition boundary and the susceptibility peaks, which expand monotonically with increasing m_r. It is also observed that the shift of the super restoration boundaries caused by changing the current mass is smaller than that of the first-order phase transition boundary and the peaks of the chiral susceptibility.

Fig. 3: Behavior of (outer) the super restoration boundaries (solid: the chiral limit, circles: m_r = 0.2, and triangles: m_r = 0.5) together with (inner: m_r = 0.01 and middle: m_r = 0.2 lines) the first-order phase transition boundaries (solid) and the peaks of the chiral susceptibility (dotted).

IV. TWO-FLAVOR NJL MODEL
In this section, we apply the previous discussion to the two-flavor NJL model in four-dimensional spacetime. The Lagrangian of the two-flavor NJL model with current quark masses is given with a mass matrix m = diag(m_u, m_d), an effective coupling constant G, the number of colors N_c, and the Pauli matrices of the isospin vector τ_a (a = 1, 2, 3). For simplicity, we assume m_u = m_d. While the Lagrangian has the SU(2)_L × SU(2)_R global symmetry in the massless limit, the symmetry is explicitly broken down to SU(2)_{L+R} because of the non-zero quark masses. Since the effective coupling has negative mass dimension and the model is not renormalizable, one has to introduce a momentum cutoff to evaluate the physical quantities. In the leading order of the 1/N_c expansion, we obtain the effective potential with the auxiliary scalar field σ ≃ −(G/N_c)⟨ψ̄ψ⟩. From the stationary condition of the effective potential, the gap equation follows, where M is the constituent mass, M = m_u + ⟨σ⟩, the trace is the sum over spinor indices, and S(M) is the quark propagator. The momentum integral in Eq. (17) is divergent, and we introduce a momentum cutoff scale later. Since we are interested in the behavior of the constituent mass in a thermal system, we apply the imaginary-time formalism with a chemical potential to the model. Using this formalism, the gap equation at a finite temperature and chemical potential is derived, with E(M) = √(q² + M²) and E_± = E(M) ± µ. Equation (19) is the zero-temperature part of the gap equation and is equal to Eq. (17) integrated with respect to q_0. We regularize the integral in Eq. (19) using the three-momentum cutoff scale Λ, and adopt the values of Ref. [13], Λ = 631 MeV and G/(2N_c) ≃ 5.51 × 10⁻⁶ MeV⁻². Equation (20) is the finite-temperature and chemical-potential part of the gap equation. Unlike the zero-temperature part, this momentum integral is finite even for Λ → ∞. In this paper, we consider two regularizations of the integral in Eq. (20), because its value is finite whether or not the limit is taken. In case 1, the same cutoff scale is used as in the zero-temperature part. In case 2, the momentum is integrated to infinity and Eq. (18) is rewritten accordingly. In this case, the effect of the finite temperature and chemical potential is independent of the regularization scale.
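As a rough numerical illustration of the two regularizations (not the authors' code), the sketch below solves the µ = 0 gap equation by root bracketing. The overall normalization, M = m + (2G/π²)∫ q²(M/E)[1 − 2n_F(E)] dq, is our assumption about the paper's conventions, so treat the numbers as qualitative; with the parameters quoted above it should give a vacuum mass of about 336 MeV and, in case 2, M dropping below m_u near T ≈ 350 MeV.

```python
import math
from scipy.integrate import quad
from scipy.optimize import brentq

# Parameters from the text: Lambda = 631 MeV, G/(2 Nc) = 5.51e-6 MeV^-2, m_u = 5.5 MeV.
LAM, NC, M_CUR = 631.0, 3, 5.5
G = 2 * NC * 5.51e-6          # MeV^-2 (assumed relation between G and G/(2 Nc))
PREF = 2 * G / math.pi**2

def n_fermi(energy, temp):
    x = energy / temp
    return 0.0 if x > 700 else 1.0 / (math.exp(x) + 1.0)

def gap_rhs(mass, temp, cutoff_thermal):
    energy = lambda q: math.sqrt(q * q + mass * mass)
    vac = quad(lambda q: q * q * mass / energy(q), 0.0, LAM)[0]
    qmax = LAM if cutoff_thermal else math.inf      # case 1 vs. case 2
    thermal = quad(lambda q: 2 * q * q * mass / energy(q) * n_fermi(energy(q), temp),
                   0.0, qmax)[0]
    return M_CUR + PREF * (vac - thermal)

def constituent_mass(temp, cutoff_thermal):
    return brentq(lambda m: m - gap_rhs(m, temp, cutoff_thermal), 1e-4, 1000.0)

print("T =  10 MeV:", round(constituent_mass(10.0, True), 1))  # ~336 MeV
for temp in (200.0, 300.0, 400.0):
    m1 = constituent_mass(temp, cutoff_thermal=True)    # case 1: M -> m from above
    m2 = constituent_mass(temp, cutoff_thermal=False)   # case 2: M can fall below m
    print(f"T = {temp:.0f} MeV: case 1 M = {m1:6.1f}, case 2 M = {m2:6.1f} (m = {M_CUR})")
```

In the massless, small-M limit the vacuum integral behaves like MΛ²/2 while the case-2 thermal integral behaves like Mπ²T²/6, so the two cancel at T = √3 Λ/π ≈ 348 MeV, consistent with the T ≃ 350 MeV crossing discussed below.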
The effective potential at finite T and µ is written accordingly; in case 2, we use the corresponding effective potential with the thermal integral extended to infinity. We show the behavior of the constituent mass as a function of T in Fig. 4. The thermal effect reduces the dynamical mass. As seen in the left figure, the thermal effect in case 2 is larger than that in case 1. At high temperature, the constituent mass, M, becomes smaller than the current quark mass, m_u = 5.5 MeV, in case 2. The super restoration takes place in case 2, but not in case 1. We show the behavior of the constituent mass as a function of µ at T = 10 MeV in Fig. 5. From the left figure, the effects of the chemical potential are indistinguishable between the two cases. The right figure shows that the constituent mass, M, approaches the current quark mass, m_u, and becomes smaller than m_u at large chemical potential, µ ≳ Λ, in cases 1 and 2, respectively. We show the behavior of the effective potential in the high-temperature region in Fig. 6. In case 1 (solid), the minimum of the effective potential, i.e., the expectation value of σ, becomes smaller as T increases. Since the chiral symmetry is explicitly broken by the current quark mass, the minimum of the effective potential does not reach 0. On the other hand, in case 2 (dashed), the minimum of the effective potential becomes 0 at T ≃ 350 MeV. The expectation value of σ changes from positive to negative as T increases. The negative value of ⟨σ⟩ causes the constituent quark masses to be smaller than the current quark masses. This means that the chiral condensate develops so as to counteract the explicit symmetry breaking at high temperature. We draw the peaks of the chiral susceptibility [41,42] and the super restoration boundary in Fig. 7. The area enclosed by the line of the maximum of the chiral susceptibility in case 2 is smaller than that in case 1. This feature is related to the fact that the thermal contribution in case 2 is larger than in case 1, as seen in Fig. 4. The difference between the two cases depends on whether or not the radiative corrections from higher-momentum quarks are dropped from the temperature effect. In the low-temperature and large-chemical-potential region, the phase boundaries of the first-order phase transitions for the two regularizations are almost identical, because the contribution from momenta higher than the Fermi momentum is negligible. The super restoration occurs only in case 2. It is found at about twice the temperature and chemical potential of the first-order phase transition and the peak of the chiral susceptibility. Although the dimensions are different, the NJL model in case 2 produces behavior of the super restoration similar to that in the GN models. The results for the GN models in two and three dimensions are independent of the regularization procedure because of their renormalizability.

V. SUMMARY AND DISCUSSIONS
We have evaluated the massive four-fermion interaction models by using the effective potential in the leading order of the 1/N expansion under the assumption that the chiral condensate is spatially homogeneous. First, we have considered the massive GN model in D-dimensional spacetime (2 ≤ D < 4). At zero temperature and chemical potential, the chiral symmetry is broken above the critical coupling, λ_r > λ_χ. The dynamically generated fermion mass approaches the current mass, m_r, from above in the weak coupling limit, λ_r → 0.
On the other hand, at finite temperature and chemical potential, we have found the boundary where the dynamical fermion mass coincides with the current mass for a finite coupling, λ_r, on the µ-T plane, as shown in Fig. 3 (together with Figs. 1 and 2), based on Eq. (9). We call this boundary the super restoration boundary. It is different from the well-known phase boundaries of the chiral symmetry. The super restoration boundary is insensitive to changes of the current mass, and the boundary in the massless limit gives a good approximation for the finite-m_r case. Next, we have investigated the super restoration in the NJL model, a prototype model of QCD. In four dimensions, the four-fermion models are non-renormalizable, and the results depend on the regularization procedure. We have employed two regularization procedures: the momentum cutoff is imposed on both the vacuum and thermal parts of the effective potential (case 1), or on the vacuum part only (case 2). In case 1, the dynamical fermion mass approaches but does not decrease below the current mass at high temperature and large chemical potential. In case 2, we have found the super restoration boundary where the dynamical fermion mass decreases across the value of the current mass. In the models containing the explicit symmetry-breaking term (current quark masses), we have found the super restoration boundaries on the µ-T plane. These boundaries represent the lines where the dynamical mass coincides with the current mass. On the boundaries, the spontaneously broken chiral symmetry is fully restored. Outside the boundaries, the chiral condensate, ⟨ψ̄ψ⟩, develops a positive value (i.e., ⟨σ⟩ ≃ −(G/N_c)⟨ψ̄ψ⟩ < 0), and the dynamical mass is smaller than the current mass. The super restoration takes place in the high-temperature and large-chemical-potential region, which is difficult to reach experimentally in QCD. It has been pointed out that an inhomogeneous chiral condensate is favored at low temperature and large chemical potential [43,44]. We will continue this work further and consider the inhomogeneous state. We are also interested in applying our results to other systems and expect that the super restoration may be observed in some physical phenomena. We hope to report on these problems in the future.

Fig. 4: Constituent mass M as a function of T at µ = 0 in the two cases. Horizontal dotted lines represent the value of the current quark mass. The right figure shows the high-temperature region of the behavior of the constituent mass.
Fig. 5: Constituent mass M as a function of µ at T = 10 MeV in the two cases. Horizontal dotted lines represent the value of the current quark mass. The right figure shows the large-chemical-potential region of the behavior of the constituent mass.
Fig. 7: Behavior of the super restoration boundary (blue circles) with the first-order phase transition boundary (black solid) and the peaks of the chiral susceptibility (gray dotted; the outer and inner correspond to case 1 and case 2, respectively).
Low Temperature Adhesive Bonding-Based Fabrication of an Air-Borne Flexible Piezoelectric Micromachined Ultrasonic Transducer

This paper presents the development of a flexible piezoelectric micromachined ultrasonic transducer (PMUT) that can conform to flat, concave, and convex surfaces and work in air. The PMUT consists of an Ag-coated polyvinylidene fluoride (PVDF) film mounted onto a laser-manipulated polymer substrate. A low-temperature (<100 °C) adhesive bonding technique is adopted in the fabrication process. Finite element analysis (FEA) is implemented to confirm the capability of predicting the resonant frequency of composite diaphragms and optimizing the device. The manufactured PMUT exhibits a center frequency of 198 kHz with a wide operational bandwidth. Its acoustic performance is demonstrated by transmitting and receiving ultrasound in air on a curved surface. The conclusions from this study indicate that the proposed PMUT has great potential in ultrasonic and wearable device applications.

Introduction
Ultrasound has been widely used in non-destructive testing (NDT) [1,2], medical diagnostics and therapy [3-5], and sensing [6-8] because of its exceptional features, such as noninvasiveness, convenience, high penetrability and sensitivity. Ultrasonic transducers, which are key components of any ultrasound system, are usually either configured with inflexible structures or fabricated from bulk piezoelectric materials. These rigid architectures afford stable performance and favorable piezoelectric properties, but they prevent ultrasonic transducers from being used on the irregular nonplanar surfaces that widely exist in real objects. Recent advances in flexible electronics provide innovative materials and fabrication processes, making it possible to realize flexible ultrasound devices that can be coupled with nonplanar surfaces [9-15]. For example, piezoelectric nanofibers with excellent properties are one of the types of materials proposed for use in wearable electronics [16-19], and some sensors can be easily embedded as a part of human skin or clothing for health monitoring via the near-field electrospinning (NFES) technique [20-22]. Among these devices, flexible piezoelectric micro-ultrasonic transducers have advantages over traditional rigid ultrasonic transducers in terms of weight, volume, adaptability and portability. The available strategies for fabricating flexible piezoelectric micro-ultrasonic transducers can be mainly divided into two categories: island-bridge connection techniques and transfer printing techniques. In the former, flexibility is achieved by connecting bulk piezoelectric ceramic islands to each other using polymer joints, or by embedding piezoelectric ceramics into patterned polymer holes [23-27]. This island-bridge connection technique is based on micromachining.

The top electrode is patterned on the PVDF film and is 460 μm in diameter. The well-known thermosetting polyimide (PI) is used as the passive layer and the bonding layer simultaneously. The thickness of the PI can be precisely controlled by spin coating, and PI also has excellent versatility and machinability. A commercial Kapton film serves as the substrate, forming a suspended structure of composite plates. The sidewall is 750 μm in diameter and the cavity is 100 μm in depth. All of the materials used are flexible, which leads to favorable flexibility of the whole device.
There are two types of vibration mode in ultrasonic transducers: longitudinal vibration mode (also called thickness vibration mode) and flexural vibration mode. For the former, the resonant frequency of the ultrasonic transducer is directly proportional to the longitudinal wave velocity of the piezoelectric material and is inversely proportional to the thickness of the piezoelectric layer [37]. Once the piezoelectric material has been chosen, the longitudinal wave velocity is immutable consequently, resulting in that the resonant frequency is solely dependent on the thickness, which limits the geometrical dimensions and design flexibility. In comparison, the flexural vibration mode is not directly related to the thickness of piezoelectric materials.
Instead, the shape, dimensions, and boundary conditions all affect the resonant frequency, which makes the design more flexible and extensible. For our PMUT, the flexural vibration mode of the circular film is utilized. The PVDF film is polarized in the direction of its thickness, and the natural vibration frequency of a circular film with an edge-fixed boundary condition can be computed from the following equations [38]: where r is the membrane radius, h is the thickness of the membrane, ρ is the mass density of the membrane, D is flexure rigidity, E is Young's modulus, ν is Poisson's ratio, and λ mn is the natural frequency constant. The first three natural frequency constants for fixed edge boundary condition are given as follows: λ 00 = 3.196, λ 01 = 4.611, λ 02 = 5.906 [39]. The typical values of the mass density, Young's modulus, and Poisson's ratio of PVDF film are 1780 kg/m 3 , 3 GPa, and 0.29 in sequence [40]. The thickness of PVDF is 28 µm and the membrane diameter is set to 750 µm. Through Equations (1) and (2), we can calculate that the first resonance frequency of PVDF film is 130.2 kHz, which is suitable for air-coupled applications. Realistically, all of the diaphragms including the piezoelectric layer and electrodes have contributed to vibration. In the case of multilayer diaphragms, various material constants should be considered. Here, the finite element analysis (FEA) method was also applied to predict the resonant frequency of composite diaphragms more reasonably. A simulation model of the PMUT built using COMSOL Multiphysics 5.3 is shown in Figure 3a. The 2D axisymmetric model and piezoelectric devices multiphysics interface were chosen. The geometry has been built in software by considering all the dimensions and material constants. The thicknesses of the top Ag electrode, PVDF film, bottom Ag electrode, and PI passive layer are 10, 28, 10 and 4 µm, respectively. The air cavity beneath the stack is 750 µm in diameter and the top electrode is optimized at 460 µm in diameter (see the section Optimization of the PMUT for details). For the material properties of PVDF the built-in material library in COMSOL Multiphysics 5.3 is employed and other properties are listed in Table 1. The fixed boundary conditions are applied at the edge of the model. Eigenfrequency analysis is used to predict the natural frequency and mode shapes of the structure. The first flexural mode shape of PMUT is shown in Figure 3b and the absolute value of the admittance is plotted in Figure 3c. From the FEA simulation results, the natural frequency is revealed to lie at approximately 202.58 kHz, which is higher than the theoretical analysis value on account of considering the effect of multilayer diaphragms. According to Equations (1) and (2), when r is fixed, the resonance frequency ( f ) is proportional to the first power of h. We assume that the membrane thickness has a greater effect on the resonant frequency than other parameters. In the case of multilayer diaphragms, h is defined as the thickness of the whole diaphragm, including top and bottom electrodes, PVDF piezoelectric layer, and PI passive layer, which is bigger than the thickness of a single PVDF layer, resulting in a higher frequency than given by the theoretical calculation. To verify this assumption, an additional FEA for only one 28 µm thick PVDF layer is applied. The dimension size and boundary conditions are the same as before, and the simulation results are shown in Figure 4. 
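As a rough numerical cross-check of the analytical estimate quoted above (the paper's Equations (1) and (2) are not reproduced in this text), the following short Python sketch evaluates the standard clamped circular plate expressions suggested by the listed symbols; the material values and dimensions are those given in the text, while the formula itself is an assumption consistent with them.

```python
import math

# Standard clamped circular plate estimate (a sketch; the paper's exact
# Equations (1)-(2) are not reproduced in this text):
#   D    = E*h**3 / (12*(1 - nu**2))              flexural rigidity
#   f_mn = lambda_mn**2 / (2*pi*r**2) * sqrt(D / (rho*h))

E, nu, rho = 3e9, 0.29, 1780.0   # PVDF Young's modulus [Pa], Poisson ratio, density [kg/m^3]
h = 28e-6                        # PVDF thickness [m]
r = 375e-6                       # membrane radius [m] (750 um diameter)
lam00 = 3.196                    # first natural frequency constant, clamped edge

D = E * h**3 / (12.0 * (1.0 - nu**2))
f00 = lam00**2 / (2.0 * math.pi * r**2) * math.sqrt(D / (rho * h))
print(f"estimated fundamental frequency: {f00 / 1e3:.1f} kHz")
# ~127 kHz, in the same range as the ~130 kHz analytical value and the
# 126.16 kHz single-layer FEA result quoted in the text.
```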
The resonant frequency of the PVDF-based monolayer structure is 126.16 kHz, which agrees approximately with the theoretical calculation. Thus, FEA is competent to predict the resonant frequency of composite diaphragms conveniently. Fabrication Process The low temperature adhesive bonding fabrication process is divided into pre-processing and main processing steps. A laser precision machining system (ProtoLaser U3, LPKF Tianjin Co., Ltd., Tianjin, China) is used to quickly pattern printing masks and flexible substrates in pre-processing. Compared with micromachining fabrication techniques such as standard photolithography, depositing, and etching, laser precision machining is convenient and time-saving. The diameter of the focused laser beam is 30 µm, and the minimum spacing of ultra-fine structures can reach 45 µm, which meets the requirements of micro-device fabrication not only in the design stage but also in the mass production stage. Ten µm thick commercial Kapton tapes were subjected to the laser machining system to form printing masks, and 100 µm thick Kapton sheets were also processed by laser to obtain flexible substrates. The key points for patterning Kapton polymers are controlling the laser power to completely ablate the pattern while avoiding the contour distortion caused by excessive ablation. The laser parameters have been optimized to process Kapton polymers with the highest yield and are given in Table 2. Figure 5 shows the results of the laser-machined printing mask (Figure 5a) and flexible substrate (Figure 5b). The diameter of the through-hole on the Kapton substrate is 750 µm, and four symmetrical triangles placed at the edges acted as the alignment marks in the adhesive bonding process. On the other hand, the temporary carrier, consisting of a layer of commercial thermal release tape (TRT, Shunsheng Electronics Co., Ltd., Shenzhen, China) mounted on a glass sheet, has also been prepared in advance, as shown in Figure 6. TRT is a kind of tape with a convertible adhesive force and is widely used for transferring graphene [41]. When it is heated to 90~100 °C, the adhesive force vanishes irreversibly. This characteristic of TRT makes it convenient to transfer samples during the fabrication process.
The schematic main processing steps are illustrated in Figure 7. It started with patterning the top Ag electrode on the PVDF film. The printing mask was pasted on the PVDF film and the conductive silver paste (0.3 mL, Zhimeikang Technology Co., Ltd., Shenzhen, China) was coated on it evenly using a special PET scraper (Figure 7a). A 30 min baking step at 60 °C was performed in a thermostatic oven to cure the conductive silver paste. After that, the printing mask was peeled off and the patterned Ag electrode remained on the PVDF film (Figure 7b). Next, the Ag-coated PVDF film was pasted on the temporary carrier upside down (Figure 7c). A layer of thermosetting PI (20% solid content, Qiancheng Plasticizing Material Co., Ltd., Dongguan, China) was spun at 3000 RPM onto the PVDF film to obtain a thickness of 4 µm (Figure 7d). To assemble the device, the prepared substrate was aligned and adhesively bonded onto the PVDF film (Figure 7e). Finally, the sample was baked at 60 °C for 10 min to solidify the PI layer and then baked at 95 °C for 5~10 s (Figure 7f) to deactivate the TRT and separate the device from the temporary carrier (Figure 7g). The processing results of some steps are shown in Figure 8. Figure 8a shows that the PVDF film was first fixed on the temporary carrier and a laser-processed printing mask was then pasted on it. The four symmetrical triangles placed on the printing mask coincided with the contour of the PVDF film for alignment. Then the conductive silver paste was daubed repeatedly to ensure that the pattern of the electrode was copied completely from the printing mask to the PVDF, as shown in Figure 8b. Because the adhesive force between the PVDF film and the temporary carrier is higher than that between the PVDF film and the printing mask, the printing mask could be peeled off carefully while the PVDF film remained on the temporary carrier. An optical image of the Ag-patterned PVDF film is shown in Figure 8c.
Figure 8d shows the sample fixed on the temporary carrier ready for adhesive bonding. The substrate was picked up by a polydimethylsiloxane (PDMS) film-covered glass sheet and then bonded onto the PVDF film without extra bonding pressure. The final device is shown in Figure 8e. Through optical microscope observation, no residues remained on the PMUT surface after deactivating the TRT. The device was not broken after the separation process, according to simple capacitance testing. Optimization of the PMUT There are some main parameters of the PMUT design, such as the thickness of the PVDF piezoelectric layer and the diameter of the cavity, whose changes exert a great influence on the resonance frequency. These parameters are regarded as non-optimizable values in this experiment. Besides, other parameters, such as the diameter of the Ag top electrode and the thickness of the PI passive layer, can be considered optimizable parameters because tiny changes won't affect the resonance frequency greatly but could improve the performance of the device. First, we optimized the diameter of the top Ag electrode to maximize the deflection displacement of the composite diaphragms. The radius of the top electrode is used as a variable, and the deflection displacement of the composite diaphragms is taken as the criterion. The thicknesses of the top Ag electrode, PVDF layer, bottom Ag electrode, and PI layer were 10, 28, 10 and 4 µm, respectively. The air cavity is also 750 µm in diameter. The top electrode radius ranges from 200 µm to 260 µm with a step size of 10 µm.
A measuring point is located at the center of the upper surface of the model to record the deflection displacement in the longitudinal direction. Figure 9a shows the deflection displacement of the measuring point at different frequencies. For convenient comparison, the y-axis represents the ratio of the deflection at different radii to the maximum deflection. From the chart, when the radius of the top electrode is 230 µm (the diameter is 460 µm), the deflection is maximum, which means the acoustic wave generated by the PMUT has the highest sound pressure level in this case. From the admittance chart shown in Figure 9b, we can also see that the resonance frequency increases with the radius. The resonance frequencies of the PMUT with different radii of the top electrode are listed in Table 3. The arithmetic average frequency increase rate is 4.07%, which illustrates that the resonance frequency is affected by the diameter of the top Ag electrode to a great extent. Then we studied the influence of the PI passive layer on the deflection displacement of the composite diaphragms. The role of the passive layer is to support the piezoelectric stack and provide a mechanical restoring force at the same time. The thickness of the passive layer ranges from 1 µm to 7 µm with a step size of 1 µm. The diameter of the top electrode is set to 460 µm according to the above optimization result, and the other parameters are maintained. A measurement point is placed at the center of the upper surface to record the longitudinal deflection. In addition, the extreme case with no passive layer has also been considered. The displacement of the measurement point under the various conditions is shown in Figure 10a. From this result, we can clearly see that the deflection has a maximum peak when the thickness of the passive layer is 4 µm. The absolute value of the admittance is plotted in Figure 10b. The results show that the resonant frequency first increases and then decreases with increasing thickness of the passive layer; the lowest resonant frequency appears when the thickness is 3 µm. In the case of no passive layer, the resonant frequency has a huge shift but the deflection displacement shows no obvious change. The arithmetic average frequency increase rate is 0.34%, which means that the thickness of the PI passive layer has little effect on the resonant frequency. Simulation of the Sound Field Analyzing the sound field of an ultrasonic device is important for its applications. The acoustic-piezoelectric interaction, frequency domain multiphysics interface was adopted in COMSOL 5.3. In order to improve computation efficiency, a simulation model with 2D axial symmetry was established. A cylindrical air domain with a radius of 2 mm and a height of 3 mm was placed in front of the PMUT model. Perfectly matched layers (PMLs) were used to absorb the sound waves propagating to the boundaries, and a fixed constraint was applied at the bottom of the PMUT model. A measuring point was placed in front of the PMUT at a distance of 2 mm.
The PVDF piezoelectric layer was driven by the electric field applied between the top and bottom electrodes. We set the frequency range from 160 kHz to 260 kHz with a step size of 1 kHz. The sound pressure level (SPL) distribution after the simulation is shown in Figure 11a. The frequency response curve of the measuring point is plotted in Figure 11b. There is a response peak at approximately 203 kHz, which is consistent with the resonant frequency of the PMUT. This result also means that the PMUT has better performance in transmitting ultrasonic waves at this frequency. The simulation result of the sound pressure field is shown in Figure 11c. The peaks and troughs of ultrasonic waves propagating in the air domain can be seen from this result. We roughly fitted the troughs of the ultrasonic waves with arcs, and then calculated the distance between two arcs by comparing the numbers on the axis labels. The distance is approximately 1.65 mm. The wavelength (λ) can be expressed as λ = c/f, where c is the sound speed in air, which is 340 m/s, and f is the frequency, which is 203 kHz. Through the equation, λ is calculated to be 1.67 mm, which is consistent with the measurement result. Frequency Response Analysis The image of the final PMUT is shown in Figure 12. It was temporarily mounted on a glass sheet for easy testing. We characterized the frequency response of the device in air using a Laser Doppler Vibrometer (PSV-500F-B, Polytec China Ltd., Beijing, China). The device was excited by a periodic chirp signal in the frequency range from 130 kHz to 250 kHz with a voltage amplitude of 80 Vpp. The frequency response and the first vibration mode shape of the PMUT are shown in Figure 13. From the results, we can see that there is a mild resonant peak at approximately 198.37 kHz, which is consistent with the previous FEA simulations. The vibration region is located in the center of the grids (Figure 13a), corresponding to the position of the cavity. The vibration velocity in other regions is almost nil, indicating that the edge-fixed boundary condition is effective, which further illustrates that the multilayered composite diaphragms are well bonded to the substrate without delamination. From Figure 13b, the frequency response curve exhibits a wide bandwidth, meaning that our PMUT offers a wide operational frequency range in applications.
Mechanical Characterizations The effect of device bending on the strain distribution was studied using FEA. Figure 14a shows that the maximum strain resides at the interface between the top electrode and the PVDF film along the bending direction, due to the presence of the air cavity. The maximum strain increases exponentially with decreasing bending radius (Figure 14b). Bending radii of 10, 8, 6 and 4 mm correspond to strain values of 0.11%, 0.17%, 0.31%, and 0.68%, respectively. Within a bending radius of 5 mm, the maximum strain is below 0.5%. There is a change point at a bending radius of 3 mm, where the maximum strain is 1.01%, which means that the performance of our device may degrade to some extent. Further improvements to the flexibility of our device include using a thinner substrate, optimizing the device shape to avoid stress concentration, and adding a top encapsulation layer according to the neutral plane theory [42]. Our PMUT can easily achieve conformal contact with flat, concave, and convex surfaces, as shown in Figure 15a. The curvature radius of the concave and convex models is 25 mm, and our PMUT can properly fit the curved surfaces.
During the bending process of the multilayered composite films, the stress on the outer surface is largest, resulting in the largest deformation. When the deformation exceeds a critical value, the films will break along the bending direction. The degree of deformation can be expressed by the relative bending radius r/t, where r is the bending radius and t is the thickness of the film. The surface strains (ε) of the top Ag electrode, the PVDF film, and the Kapton substrate can then be calculated from ε_Ag = t_Ag/(2r), ε_PVDF = t_PVDF/(2r), and ε_Kapton = t_Kapton/(2r), where t_Ag, t_PVDF, t_Kapton are the thicknesses of the top Ag electrode, the PVDF film, and the substrate, which are 10, 28 and 100 µm in sequence, and r is the radius of the model, which is 17.5 mm. Using these relations, the surface strain values can be calculated: ε_Ag = 0.028%, ε_PVDF = 0.08%, and ε_Kapton = 0.28%. The curved film will not crack under the condition ε ≤ δ_max, where δ_max is the elongation of the material [43]. Because the film thicknesses are on the micro-scale, ε is much less than δ_max, so bending will not affect the performance of the device. Under the premise of bending safely, a specially-made linear mobile platform was used to clamp and alternately bend or unbend the PMUT (Figure 15b). The testing results verified that our PMUT could endure biaxial bending forces and survive consecutive mechanical deformations. We also pasted the PMUT on the back of the hand and twined it around the wrist to demonstrate its suitability for skin applications, as shown in Figure 15c. These results confirmed that the flexibility and mechanical stability of our PMUT could satisfy the ordinary demands of wearable devices. Acoustic Characterizations An acoustic transmit-receive system was set up in air for the acoustic characterization of our PMUT. The schematic diagram of this system is shown in Figure 16. A sine burst with five cycles was produced by a function generator (AFG1062, Tektronix, Dongfang Zhongke Integrated Technology Co., Ltd., Beijing, China) and amplified by a power amplifier (75A250A, 10 kHz~250 MHz, Amplifier Research Corporation, Souderton, PA, USA) to drive the transmitter. The resulting signal from the receiver was amplified using a Manually Controlled Ultrasonic Pulser-Receiver (5072PR, OLYMPUS, Tairu Electronic Technology Co., Ltd., Beijing, China) in receiving mode and finally displayed on an oscilloscope (DPO 2014, Tektronix, Dongfang Zhongke Integrated Technology Co., Ltd., Beijing, China).
Figure 16. Acoustic transmit-receive system. Firstly, our PMUT was applied as a transmitter and a 200 kHz commercial ultrasonic transducer (UT, JINCI Technology Co., Ltd., Shenzhen, China) was used as a receiver. The PMUT was tested in different states: a flat state, an up-bending state, and a down-bending state, as shown in Figure 17a-c. The ultrasonic transducer was placed opposite to the PMUT and the distance between them was 20 mm. The PMUT was driven by an 80 Vpp burst with a frequency of 195 kHz. The transmitted signal was successfully captured by the ultrasonic transducer, as shown in Figure 17d. From the results, we can see that these curves overlap with each other, indicating that the performance of the flexible PMUT is well preserved under mechanical bending. The amplitude of the signal was 30 mVpp with a voltage gain of 40 dB, and its corresponding Fast Fourier Transform (FFT) exhibited a center frequency of approximately 195 kHz. Then the curved PMUT was used as a receiver and the ultrasonic transducer served as a transmitter. The PMUT was mounted on a cylinder with a 25 mm curvature radius and wired to a BNC connector through a coaxial line (Figure 18a). The distance between them was also 20 mm. The ultrasonic transducer was excited by a 200 kHz burst with a voltage amplitude of 80 Vpp. The signal received by the PMUT was processed in MATLAB, and a low-pass filter with a cutoff frequency of 300 kHz was applied to reduce the electrical noise. The multiple reflections of acoustic waves between the PMUT and the ultrasonic transducer were captured as shown in Figure 18b.
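The MATLAB post-processing described above can be sketched roughly as follows; this is only an illustrative Python/SciPy version, and the sampling rate `fs` and the placeholder array `raw` standing in for the captured trace are assumptions, not values from the paper. The interval between successive reflection peaks extracted here is what the next paragraph converts into a distance.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

fs = 10e6                           # assumed oscilloscope sampling rate [Hz]
t = np.arange(0, 1e-3, 1 / fs)      # 1 ms record
raw = np.random.default_rng(0).normal(0, 1e-3, t.size)  # placeholder for the captured trace

# 4th-order Butterworth low-pass at 300 kHz to suppress electrical noise
b, a = butter(4, 300e3, btype="low", fs=fs)
filtered = filtfilt(b, a, raw)

# Locate successive reflection peaks on the signal envelope
env = np.abs(filtered)
peaks, _ = find_peaks(env, height=env.max() * 0.3, distance=int(50e-6 * fs))
if len(peaks) >= 2:
    tof = (peaks[1] - peaks[0]) / fs   # interval between the first two peaks [s]
    print(f"time of flight between peaks: {tof * 1e6:.1f} us")
```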
The time of flight (TOF) between the first two peaks could be measured from the time-domain curve and was 121 µs. The distance between the PMUT and the ultrasonic transducer can be computed from L = vt/2, where L is the distance between the two objects, v is the sound speed in air at room temperature, which is 340 m/s, and t is the time of flight. Using this relation, the distance was estimated to be 20.57 mm, which is approximately consistent with the pre-set value. Conclusions In this work, a flexible PMUT operating in air was successfully designed, fabricated and characterized. Finite element analysis was employed to predict the resonant frequency of composite diaphragms and optimize the structure. A low temperature adhesive bonding technique, which aims to minimize the fabrication steps and reduce costs, was used to manufacture the PMUT stably and effectively. The resulting device, based on the flexural vibration mode, has a center resonant frequency of 198 kHz with a wide operational bandwidth. Our device makes good conformal contact with flat, concave and convex surfaces, and survives continuous tensile and compressive bending forces. Furthermore, an acoustic transmit-receive system has been established for the acoustic characterization of our device. These experimental results confirm that the proposed PMUT has the potential to be integrated with intelligent devices and wearable electronics.
9,133.6
2020-06-01T00:00:00.000
[ "Engineering" ]
Enhancing machining accuracy reliability of multi-axis CNC machine tools using an advanced importance sampling method The purpose of this paper is to propose a general precision allocation method to improve the machining performance of CNC machine tools based on given design requirements. A comprehensive error model of machine tools is established by using the differential motion relation of coordinate frames. Based on the comprehensive error model, a reliability model is established by updating the primary reliability with an advanced importance sampling method, which is used to predict the machining accuracy reliability of machine tools. Besides, to identify and optimize the geometric error parameters that have a great influence on the machining accuracy reliability of machine tools, a sensitivity analysis of the machining accuracy is carried out by the improved first-order second-moment method. Taking a large CNC gantry guide rail grinder as an example, the optimization results show that the method is effective and can realize reliability optimization of the machining accuracy. Introduction CNC machine tools integrate many technologies, such as precision machinery, electronics, electric drive, automatic control, automatic detection, fault diagnosis, and computer technology. They are typical mechatronics products with high accuracy and efficiency [22]. Machining accuracy is critical to the quality and performance of machine tools and is the first consideration of any manufacturer [20]. Machining accuracy reliability is the ability of machine tools to work normally and achieve the corresponding machining accuracy under specified conditions [14]. Its main influencing factors include geometric errors, thermal errors, cutting force errors, etc. Geometric errors and thermal errors are the main influencing factors, accounting for 45%-65% of the total errors. The higher the accuracy of the machine tool, the bigger the proportion of geometric errors and thermal errors [12]. When the temperature reaches a stable state, the impact of geometric errors is the largest, accounting for about 40% of the total errors [5]. A large CNC gantry rail grinder has a long travel range and is suitable for heavy machinery, ships, and metallurgical equipment. This paper takes it as an example to analyze the relationship between the geometric errors of the components of the grinder and the reliability of the grinding accuracy. The accuracy design of machine tools includes two aspects: accuracy prediction and accuracy allocation [10]. Accuracy prediction refers to predicting the volumetric errors of a machine tool based on the known accuracy of the updated and maintained parts, and then predicting the machining accuracy of the workpiece [3]. Accuracy prediction is the basis of accuracy design. Error models are often used to predict the accuracy of machine tools. At present, the methods for establishing a comprehensive error model of machine tools include the matrix translation method, the error matrix method, rigid body kinematics, and modeling methods based on multi-body system (MBS) theory [2]. Among them, modeling methods based on MBS theory are widely used, but the amount of calculation is large and the process is complicated. In the modeling process, the ideal position matrices, position error matrices, ideal motion matrices, and motion error matrices of the components need to be considered at the same time.
To reduce the amount of calculation, a geometric error modeling method based on the differential motion relation of coordinate frames is adopted in this paper. By establishing the differential motion matrices between components, the transmission relationship between the geometric errors of the components and the comprehensive error of the machine tool is determined. Accuracy allocation refers to obtaining the accuracy of updated maintenance parts according to the total accuracy preset for the machine tool, so that the accuracy of the parts can reach the optimal scheme [17]. Its main content is to establish the reliability model of machining accuracy and the sensitivity model of machining accuracy reliability. There are many important methods of reliability and sensitivity analysis, such as differential analysis, response surface methodology, Monte Carlo analysis, and variance decomposition procedures [1]. Zhang et al. [21] established a geometric error cost model and a geometric error reliability model based on the traditional cost model and reliability analysis model, considering the principle of the weighting function. Then, an error allocation method was proposed to optimize the total cost and the reliability. Cheng et al. [6] developed an error allocation method based on the first-order second-moment method to optimize the allocation of manufacturing and assembly tolerances while specifying operating conditions to determine the optimal level of these errors. Based on the Monte Carlo simulation method, reliability and sensitivity analysis models of the machining accuracy of machine tools were given by Cheng et al. [8]. The machining accuracy reliability is taken as the index to measure the capability of the machine tools, and the reliability sensitivity is taken as the reference to optimize the basic parameters of the machine tools. The validity of this method was verified by taking a three-axis machine tool as an example. In this paper, the reliability model of machining accuracy is established by updating the primary reliability based on an importance sampling method, which can determine the reliability of grinding machines at different machining locations. Different geometric errors have different effects on the reliability of machining accuracy of machine tools. How to effectively find and control the key geometric errors is the main problem in improving the machining accuracy [15]. Through sensitivity analysis of machining accuracy reliability, the most critical geometric errors can be identified. Lee and Lin studied the effect of each assembly error term on the volumetric error of a five-axis machine tool according to form-shaping theory [13]. Chen [4] studied volumetric error modeling and its sensitivity analysis for the purpose of machine design. Cheng [7] considered the stochastic characteristics of geometric errors and used Sobol's global sensitivity analysis method to identify crucial geometric errors of machine tools, which is helpful to improve the machining accuracy of multi-axis machine tools. In this paper, the improved first-order second-moment method is used to establish a sensitivity analysis model, which can identify and optimize the main geometric error parameters that affect the machining accuracy reliability, so that the machining accuracy reliability of machine tools can meet the design requirements.
In this paper, the principle of differential motion between coordinate frames is applied to the geometric error modeling of machine tools, and a new precision design method is proposed by combining it with reliability theory. It has important theoretical significance and practical value for further study of the machining precision reliability of machine tools. The differential motion vector of a rigid body or coordinate frame includes a differential translation vector and a differential rotation vector [9]. The differential translation consists of the differential movements of the coordinate frame along the three coordinate axes, and the differential rotation consists of the differential rotations of the coordinate frame about the three coordinate axes, so the differential motion vector of a coordinate frame can be written as D = [d_x, d_y, d_z, δ_x, δ_y, δ_z]^T. According to the differential motion relation between coordinate systems, the differential motion in one coordinate frame can be represented in another coordinate frame. The relationship between the differential changes of two coordinate frames can be established by a 6 × 6 transformation matrix, which is the differential motion matrix [16]. Assume that the homogeneous transformation matrix of coordinate frame c relative to coordinate frame d is T = [[R, P], [0, 1]], with rotation matrix R and position vector P. Then the differential motion matrix of coordinate frame d relative to coordinate frame c can be expressed in terms of R^T and (P×), where (P×) represents the skew-symmetric matrix of the vector P. The differential motion matrix reflects the transfer relationship of differential motion between coordinate frames: given the differential motion vector of coordinate frame d, the differential motion vector of coordinate frame c caused by the differential motion of coordinate frame d is obtained by multiplying it by the differential motion matrix. The rest of the paper is organized as follows. In Sect. 2, a comprehensive geometric error model of a machine tool is established based on the differential motion relationship between coordinate systems. In Sect. 3, a general precision allocation method that includes machine tool reliability prediction and error parameter optimization is proposed, and the effectiveness of the method is validated on a large CNC gantry guide rail grinder. The conclusions are presented in Sect. 4. 2. Geometric error modeling of a machine tool based on the differential motion relation of coordinate frames Differential motion matrix of a machine tool When the differential transformation between coordinate frames is applied to the geometric error modeling of a machine tool, the influence of the geometric errors of the various parts of the machine tool on the machining accuracy can be obtained and a geometric error model can be established. Taking a large CNC gantry rail grinder as an example, the geometric error modeling process of this machine tool is presented using the differential motion relation of coordinate frames. The basis of geometric error modeling is to obtain the homogeneous transformation matrices between the components of the machine tool. Firstly, the homogeneous transformation matrices of the tool relative to the other components are established according to the order of the open kinematic chain of the machine tool. The structure of the large CNC gantry rail grinder is shown in Figure 1 and the corresponding topological structure is shown in Figure 2. The order of the open kinematic chain is working table - X-axis - bed - Z-axis - Y-axis - tool. The components of the grinder are regarded as rigid bodies and their local coordinate frames are established.
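The 6 × 6 differential motion matrix described above can be illustrated with a short sketch. Since the paper's Eqs. (1)–(5) are not reproduced in this text, the sketch assumes the common robotics convention built from R^T and the skew-symmetric matrix (P×); the example transform and error values are hypothetical.

```python
import numpy as np

def skew(p):
    """Skew-symmetric matrix (P x) of a 3-vector P."""
    return np.array([[0.0, -p[2], p[1]],
                     [p[2], 0.0, -p[0]],
                     [-p[1], p[0], 0.0]])

def differential_motion_matrix(T):
    """6x6 matrix mapping a differential motion vector [d; delta] from one frame
    into another, given the 4x4 homogeneous transform T = [[R, P], [0, 1]] between
    the frames.  The paper's exact sign/ordering convention is not reproduced in
    this text, so the common form [[R^T, -R^T (P x)], [0, R^T]] is used here."""
    R, P = T[:3, :3], T[:3, 3]
    M = np.zeros((6, 6))
    M[:3, :3] = R.T
    M[:3, 3:] = -R.T @ skew(P)
    M[3:, 3:] = R.T
    return M

# Example: a pure translation by x along the X-axis (as for the X carriage)
x = 0.25
T = np.eye(4)
T[0, 3] = x
D_d = np.array([1e-6, 2e-6, 0.0, 0.0, 0.0, 5e-6])  # [dx, dy, dz, deltax, deltay, deltaz]
D_c = differential_motion_matrix(T) @ D_d            # same motion expressed in the other frame
print(D_c)
```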
Based on the MBS theory, the homogeneous transformation matrices between the components of the grinder are established: the working table relative to the X-axis, the X-axis relative to the bed, the Z-axis relative to the bed, the Y-axis relative to the Z-axis, and the tool relative to the Y-axis. From these, the homogeneous transformation matrix of the bed relative to the X-axis coordinate frame can be obtained, where x, y, and z denote the moving distances of the X-axis, Y-axis and Z-axis, respectively. From the order of the open kinematic chain of the grinder, the homogeneous transformation matrices of the tool relative to the other parts of the grinder can be obtained, and from Equations (3) and (8) the differential motion matrices of each axis of the grinder relative to the tool follow. Error modeling There are 21 geometric errors in the grinder, including three linear errors and three angular errors for each axis, and three squareness errors. In Figure 3, δ_xx, δ_yx, δ_zx represent the linear errors of the X-axis in the x-, y-, and z-directions, and ε_xx, ε_yx, ε_zx represent the angular errors of the X-axis about the x-, y-, and z-directions. Fig. 3. Basic geometric errors of the X-axis. In Figure 4, S_xz, S_yz, S_xy represent the squareness errors between the X-axis and Z-axis, the Y-axis and Z-axis, and the X-axis and Y-axis. Fig. 4. Distribution of the squareness errors among the three axes. The geometric errors of each part of the grinder can be regarded as the differential motion of that part in its own coordinate system. Linear errors are expressed as differential translations, and angular errors as differential rotations. The six basic errors change with the motion of the grinder and together form the differential motion vector of each component i. There is no geometric error in the working table or the bed, so the differential motion vectors of the working table and the bed are zero. The squareness errors are an important part of the geometric errors of machine tools. In the process of error modeling, the squareness errors can be regarded as angular errors of the corresponding axes: S_xy can be regarded as the angular error of the X-axis in the z-direction, S_xz as the angular error of the Z-axis in the y-direction, and S_yz as the angular error of the Z-axis in the x-direction. The differential motion vectors of the Z-axis and Y-axis are then formed accordingly. When the grinder is treated as one open kinematic chain with the reference coordinate frame located on the working table, the directions of the geometric errors of the parts between the bed and the working table are opposite to the directions defined in the measurement, so the differential motion vector of the X-axis carries the opposite sign. With the differential motion matrices of each part relative to the tool and the differential motion vectors of each part, the differential motion vectors of the geometric errors of each part in the tool coordinate frame can be obtained.
By substituting Eqs. (9) and (15) into Eq. (5), the differential motion vector of the geometric errors of the X-axis in the tool coordinate frame can be obtained. In the same way, the differential motion vectors of the geometric errors of the Y-axis and Z-axis in the tool coordinate frame can also be obtained. Then, by adding the differential motion vectors of the geometric errors of the components in the tool coordinate frame, the comprehensive error vector ΔE_T of the tool is obtained, which shows the influence of the geometric errors of the components on the tool coordinate frame; ΔE_T is the comprehensive geometric error model of the grinder in the tool coordinate frame. Reliability modeling of machining accuracy Reliability refers to the ability of a product to complete specified functions under specified conditions and within a specified time. It is one of the most important quality attributes of components, products, and complex systems [11]. Here, machining accuracy reliability, which reflects the ability of machine tools to maintain machining accuracy, is considered. To reflect the influence of the geometric errors of machine tools on the reliability of machining accuracy, a method of updating the primary reliability with an importance sampling method is proposed in this paper, and the reliability model of the machining accuracy of the grinder is given. Compared with the commonly used sampling methods, this method ensures that the shape of the limit state surface is taken into account and that sampling is concentrated in the important regions. Consider a limit state function Z = g_X(X) = g_X(X_1, X_2, …, X_n), where the random variables X are independent and normally distributed, with mean values μ_X = (μ_X1, μ_X2, …, μ_Xn) and variances σ_X = (σ_X1, σ_X2, …, σ_Xn). Let x* be a point on the limit state surface. To calculate the reliability index, a Taylor series expansion of Z = g_X(X) at the point x* is used to linearize the limit state function, and the sensitivity coefficient α_Xi of each variable is obtained from this expansion. Transforming the basic random variable space X into the space Y of independent standard normal random variables, the function becomes Z = g_Y(Y). The improved first-order second-moment method is used to solve for the reliability index β, the design point y*, and the sensitivity vector α_Y = [α_Y1, α_Y2, …, α_Yn]^T in Y space. An orthogonal matrix H = [H_1, H_2, …, H_{n-1}, α_Y] is then constructed from α_Y by an orthogonal normalization technique. Using H to rotate the Y space into another standard normal variable space P, the design point in P space is p* = H^T y*. The limit state surface g_P(P) = 0 is orthogonal to the P_n axis at p*, and the positive direction of the P_n axis points to the failure region. In P space, the failure probability can then be estimated by sampling the standard normal random variables, which yields an unbiased estimate of the failure probability. Let I be the maximum allowable error of the grinder; the limit state function Z of the grinder is then constructed from I and the comprehensive error model. The value of the limit state function can be used to judge the performance of the grinder: when Z > 0, the machine tool is in a reliable state; otherwise, the machine tool is in an unreliable state.
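A minimal numerical sketch of design-point importance sampling for such a limit state is given below. The coefficients, standard deviations and tolerance are illustrative only, a simple linear limit state stands in for the grinder's comprehensive error model, and the paper's P-space rotation is not reproduced.

```python
import numpy as np
from scipy.stats import norm

# Design-point importance sampling for a simple limit state Z = I - a.X with
# independent zero-mean normal geometric errors (illustrative values only).
rng = np.random.default_rng(1)
a = np.array([1.0, 0.8, 0.5])          # sensitivities of the error to three error terms
sigma = np.array([8e-3, 6e-3, 5e-3])   # standard deviations of the error terms [mm]
I = 0.03                               # maximum allowable error [mm]

def g(x):                              # limit state: > 0 reliable, <= 0 failed
    return I - x @ a

# Most probable failure point (design point) for this linear case
x_star = a * sigma**2 * I / np.sum((a * sigma)**2)

# Sample from a normal density centred at the design point and reweight
N = 200_000
x_s = x_star + rng.normal(0.0, sigma, size=(N, a.size))
w = np.exp(-0.5 * np.sum((x_s / sigma)**2 - ((x_s - x_star) / sigma)**2, axis=1))
pf = np.mean((g(x_s) <= 0) * w)

beta = I / np.sqrt(np.sum((a * sigma)**2))   # analytic check for this linear case
print(f"IS estimate: Pf = {pf:.3e}, reliability = {1 - pf:.5f}")
print(f"analytic   : Pf = {norm.sf(beta):.3e}")
```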
The geometric errors of each part of the grinder are generally considered to be normally distributed and independent of each other, so the importance sampling method for updating the reliability is suitable for the reliability modeling of the machining accuracy of the grinding machine. Sensitivity analysis of machining accuracy reliability The machining accuracy reliability of machine tools is determined by the distribution types and distribution parameters of all design variables, and the sensitivity of the reliability to the different influencing factors varies greatly [19]. In this paper, the sensitivity of the grinder is analyzed by the improved first-order second-moment method, and the sensitivity of the machining accuracy reliability to the different geometric error parameters is determined. The reliability of machining accuracy is given by the improved first-order second-moment method, and its partial derivatives with respect to the mean and variance of each geometric error are then derived; in this way, the reliability sensitivity of machining accuracy to the various geometric error parameters is obtained. Till now, a precision design method has been put forward. It takes into account the geometric errors of machine tools and includes an accuracy reliability model and a reliability sensitivity model. Its process is shown in Figure 5.
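For the linear, normally distributed special case, the first-order second-moment reliability and its sensitivity to the error standard deviations can be sketched as below. This is only an illustration of the kind of sensitivity values examined in the case study that follows, not the paper's improved FOSM expressions, and all numbers are assumed.

```python
import numpy as np
from scipy.stats import norm

# FOSM reliability and sensitivity for the linear/normal special case Z = I - a.X
# (illustrative values; the paper's expressions for the full nonlinear model differ).
a = np.array([1.0, 0.8, 0.5])
sigma = np.array([8e-3, 6e-3, 5e-3])
mu = np.zeros(3)
I = 0.03

sigma_Z = np.sqrt(np.sum((a * sigma)**2))
beta = (I - a @ mu) / sigma_Z          # reliability index
R = norm.cdf(beta)                     # machining accuracy reliability

# dR/dsigma_i = phi(beta) * dbeta/dsigma_i,
# with dbeta/dsigma_i = -beta * a_i^2 * sigma_i / sigma_Z^2 for this linear case
dR_dsigma = norm.pdf(beta) * (-beta * a**2 * sigma / sigma_Z**2)
for i, s in enumerate(dR_dsigma):
    print(f"error term {i}: dR/dsigma = {s:.3f}")
print(f"beta = {beta:.3f}, reliability R = {R:.5f}")
```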
Figures 6-8 show the sensitivity of the geometric error variances in each direction. It can be seen that the geometric errors with high sensitivity in the x-direction are ε_yx, ε_zx, ε_zz, and S_xy; for the y-direction, they are ε_xx, ε_zx, and S_xy; and for the z-direction, they are ε_xx, ε_yx, ε_xz, and S_yz. The reliability of machining accuracy can be improved by adjusting the geometric errors with higher sensitivity. Tables 5-7 show the results after improvement. From Tables 5-7 it can be seen that, after optimizing the geometric error parameters with high sensitivity, the minimum and average values of machining accuracy reliability in the x, y, and z directions all meet the design requirements. Therefore, it can be concluded that the reliability model and sensitivity model presented in this paper are feasible and effective when the geometric error distribution types and distribution parameters of the machine tool are known. The reliability calculation method proposed in this paper incorporates stochastic simulation and statistical analysis, which can solve reliability problems with high nonlinearity. In fact, a CNC machine tool is a complex piece of mechanical equipment with highly nonlinear behaviour; therefore, this method is well suited to analyzing the machining accuracy of machine tools. Conclusion In this paper, a general precision design method for CNC machine tools is proposed. The method takes the average value and minimum value of machining precision reliability as constraints and combines them with reliability sensitivity analysis. 1. The error modeling method based on the differential motion relation between coordinate frames requires less calculation and can clearly explain the geometric meaning of the contribution of the geometric error of each part to the total error. 2. Based on the comprehensive error model and the advanced importance sampling method, the accuracy reliability model and reliability sensitivity model of the machine tool are given to optimize its machining accuracy reliability. 3. The effectiveness of the method proposed in this paper is validated on a large CNC gantry guide rail grinder; the results show that the machining accuracy reliability of the machine tool can be improved.
5,139.4
2021-07-07T00:00:00.000
[ "Materials Science" ]
Research on Voiceprint Recognition Based on the BP-GA Algorithm In order to improve the performance of a voiceprint recognition system, this paper proposes to use a BP neural network combined with a genetic algorithm (BP-GA) for voiceprint recognition. The algorithm overcomes the problem that a traditional multi-layer artificial neural network easily falls into local minima, by training the network with a genetic algorithm. The experimental results show that the BP-GA algorithm has the advantages of faster recognition speed, higher recognition rate, lower error rate, automatic error correction, and robustness to different speakers compared with traditional recognition algorithms (LPCC, MFCC, etc.). Introduction With the rapid development of network technology, information security is becoming more and more important. Conventional password authentication has revealed shortcomings in the use of information networks, while biometric technology has become increasingly mature and has shown its superiority in practical applications. Among biometric methods, voiceprint recognition is a recognition technology developed in recent years. Compared with other biometrics, voiceprint recognition has many advantages, such as simple, accurate, economical, and non-contact identification. The BP-GA algorithm is proposed in this paper to identify voiceprints. Compared with traditional recognition algorithms, it has the advantages of fast recognition, a high recognition rate, a low error rate, automatic error correction, and robustness to different speakers. Vector processing of the voice signal by wavelet transform Because the BP-GA recognition algorithm identifies digital variables, the voice signal must first be digitally vector-preprocessed. Wavelet analysis is a relatively new time-frequency analysis method with variable resolution; it accurately reflects non-stationary transient changes, has good anti-noise capability, and offers other advantages. It can better reflect the dynamic information of the voice signal, is simple to implement, and has low computational complexity. It not only fully reflects the auditory characteristics of the human ear but also accurately reflects the dynamic characteristics of the voice signal, thereby improving the final voiceprint recognition rate. For a one-dimensional voice signal, the continuous wavelet transform (CWT) is defined as the inner product of the signal and the wavelet basis function. Similar to the short-time Fourier transform, the original signal can be recovered from the wavelet transform of the voice signal through the corresponding inversion formula, a process called wavelet inverse-transform reconstruction. It is assumed that a one-dimensional voice signal containing noise can be expressed as s(i) = f(i) + e(i), where f(i) is the true voice signal, e(i) is Gaussian white noise or another noise signal, and s(i) is the observed signal containing both the noise and the useful low-frequency component to be extracted. The purpose of signal extraction is to recover the true, slowly varying signal f(i) from the noisy signal s(i). In practical engineering, the voice signal usually presents as a relatively stable signal, while the noise usually appears as a higher-frequency signal. Therefore, we choose a wavelet basis, determine a wavelet decomposition level, and then decompose the signal s into N levels. Considering the practical conditions and the amount of computation, we use N = 3, as shown in Figure 1.
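As an illustration of this preprocessing step, the sketch below performs a three-level wavelet decomposition of a noisy one-dimensional signal s(i) = f(i) + e(i) and suppresses the high-frequency detail coefficients before reconstruction. It is only a schematic example: the PyWavelets library, the db4 basis, the universal threshold, and the toy signal are assumptions made for illustration, not the feature-extraction pipeline used in the paper.

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_denoise(s, wavelet="db4", level=3):
    """3-level wavelet decomposition of a noisy signal s(i) = f(i) + e(i),
    soft-thresholding of the detail coefficients, and reconstruction."""
    coeffs = pywt.wavedec(s, wavelet, level=level)
    # universal threshold, with the noise level estimated from the finest detail level
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2.0 * np.log(len(s)))
    coeffs[1:] = [pywt.threshold(d, thr, mode="soft") for d in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(s)]

# toy demonstration: a slowly modulated "voice-like" component plus white noise
fs = 8000
t = np.arange(fs) / fs
f_true = np.sin(2 * np.pi * 5 * t) * np.sin(2 * np.pi * 200 * t)
s_noisy = f_true + 0.2 * np.random.default_rng(0).normal(size=t.size)

f_est = wavelet_denoise(s_noisy)
print("residual RMS:", np.sqrt(np.mean((f_est - f_true) ** 2)))
```

In the paper, the retained wavelet coefficients would then presumably be assembled into the feature vector passed to the BP-GA classifier described next.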
BP-GA Algorithm The BP-GA algorithm is an identification method developed in recent years, with high efficiency and strong recognition ability [3]. The genetic algorithm neither depends on gradient information nor requires the objective function to be continuous, and it may not even need an explicit expression of the objective function. The combination of an artificial neural network and a genetic algorithm not only addresses the low efficiency and long training time of the BP neural network, but also exploits the global search capability of the genetic algorithm, so it is an effective and feasible identification method. The specific realization of the process is shown in Figure 2. BP neural network construction for voiceprint recognition A BP multi-layer feed-forward neural network divides the network into several layers arranged in sequence. The neurons in layer i only receive the signals given by the neurons in layer (i-1), and the neurons in each layer have no feedback. When a vector x is input to a forward network, a vector y is output after passing through the network; therefore, the forward neural network can be regarded as a converter that implements the mapping from x to y. In this specific application, a 3-layer BP network is used, and the size of the input vector should be adjusted according to the actual situation. The first layer is the normalization layer, with input vector [Q1, Q2, Q3, Q4]. The second layer is the BP network input layer, with five nodes. The third layer is the output layer with one node; its output characteristic function is an S-type (sigmoid) function, and the output value is the credibility of the voiceprint identification, a continuous number in the (0, 1) interval. ω¹_ij is the input-layer weight (i = 1, 2, 3; j = 1, 2, 3). Design of the genetic algorithm The genetic algorithm includes five basic elements: coding, the initial population, the fitness function, the genetic operations, and parameter control and termination rules. Coding is a bridge connecting the problem and the algorithm. In order to facilitate genetic search in a large space and improve the accuracy of the algorithm, a floating-point encoding method is adopted. Each individual in the initial population consists of a 40-bit binary string, each bit of which is generated as follows: a random number in (0, 1) is generated, and if this number is greater than 0.5 the bit is set to 1, otherwise to 0. The fitness function is based on the conventional penalty-function idea in optimization; since the problem here is error minimization, the fitness function is defined so that a smaller error yields a higher fitness. Genetic manipulation includes four parts: selection, crossover, mutation, and population updating. (2) Crossover: a uniform crossover method is chosen, with the crossover masks generated randomly. (3) Mutation: mutation bits are selected at random and their values are flipped. Two termination rules are used in this paper: when the difference between the maximum value and the minimum value of the objective function in the population is less than the given accuracy of 1e-4, the algorithm is considered to have converged and the program terminates; otherwise, the maximum number of generations is set to 100, and the program terminates when the number of iterations reaches this value. In the experiments, comparative tests were carried out using this recognition algorithm and a plain BP neural network, respectively, with the feature vectors of 100 speakers' voices from the voice database.
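To make the hybrid training procedure concrete, the following sketch evolves the weights of a small 4-5-1 sigmoid network with a genetic algorithm that uses the operators described above: random binary initialization, roulette selection, uniform crossover with random masks, bit-flip mutation, and the two termination rules. Everything in it is an illustrative assumption rather than the paper's implementation: the 10-bit-per-weight binary encoding (the paper quotes a 40-bit individual), the weight range, and the synthetic "speaker" data are placeholders, and the population size, crossover probability, and mutation probability are the values quoted later in the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# 4-5-1 network, as in the text (4-feature normalized input, 5 nodes, 1 credibility output)
N_IN, N_HID, N_OUT = 4, 5, 1
N_WEIGHTS = N_IN * N_HID + N_HID + N_HID * N_OUT + N_OUT   # 31 real-valued genes
BITS_PER_GENE = 10                # assumption: each weight encoded with 10 bits
W_MIN, W_MAX = -2.0, 2.0          # assumed weight range

def decode(chrom):
    """Map a binary chromosome to real-valued network weights."""
    genes = chrom.reshape(N_WEIGHTS, BITS_PER_GENE)
    ints = genes @ (2 ** np.arange(BITS_PER_GENE)[::-1])
    return W_MIN + (W_MAX - W_MIN) * ints / (2 ** BITS_PER_GENE - 1)

def forward(w, X):
    """Sigmoid MLP producing a credibility score in (0, 1)."""
    i = 0
    W1 = w[i:i + N_IN * N_HID].reshape(N_IN, N_HID); i += N_IN * N_HID
    b1 = w[i:i + N_HID]; i += N_HID
    W2 = w[i:i + N_HID * N_OUT].reshape(N_HID, N_OUT); i += N_HID * N_OUT
    b2 = w[i:]
    h = 1 / (1 + np.exp(-(X @ W1 + b1)))
    return (1 / (1 + np.exp(-(h @ W2 + b2)))).ravel()

def fitness(chrom, X, y):
    """Penalty-style fitness: smaller mean squared error -> larger fitness."""
    err = np.mean((forward(decode(chrom), X) - y) ** 2)
    return 1.0 / (1.0 + err)

# toy stand-in for the wavelet feature vectors of target / impostor speakers
X = rng.normal(size=(100, N_IN))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)    # placeholder labels (1 = target speaker)

POP, PC, PM, MAX_GEN = 100, 0.51, 0.032, 100       # parameters quoted in the text
L = N_WEIGHTS * BITS_PER_GENE
pop = rng.integers(0, 2, size=(POP, L))

for gen in range(MAX_GEN):
    fit = np.array([fitness(c, X, y) for c in pop])
    if fit.max() - fit.min() < 1e-4:               # convergence rule from the text
        break
    idx = rng.choice(POP, size=POP, p=fit / fit.sum())     # roulette-wheel selection
    pop = pop[idx]
    for i in range(0, POP - 1, 2):                 # uniform crossover with random masks
        if rng.random() < PC:
            mask = rng.integers(0, 2, size=L, dtype=bool)
            a, b = pop[i].copy(), pop[i + 1].copy()
            pop[i][mask], pop[i + 1][mask] = b[mask], a[mask]
    flip = rng.random(size=pop.shape) < PM          # bit-flip mutation
    pop[flip] ^= 1

best = pop[np.argmax([fitness(c, X, y) for c in pop])]
scores = forward(decode(best), X)
print("training accuracy with 0.5 threshold:", np.mean((scores > 0.5) == y))
```

In a BP-GA hybrid, the best chromosome found by the GA would typically be used as the starting point for a final back-propagation fine-tuning pass, which is the usual division of labour between the two methods.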
The target speaker's desired output is 1 and a counterfeiter's is 0, and the identification criterion is 0.5. In the genetic algorithm program, implemented in the MATLAB 7.0 environment, the initial population size is 100, the crossover probability is 0.51, and the mutation probability is 0.032. In the BP neural network, the tansig transfer function is used in the hidden layer and the satlin transfer function in the output layer; a single output node is used in order to improve the training speed. The general computer configuration is a Pentium M CPU clocked at 1.4 GHz, 512 MB of memory, and Windows XP. Figure 3 shows the BP neural network training results. Calculated with the MATLAB neural network tools, the final relative recognition error rate of the trained BP neural network is about 1.6%, as shown in Figure 3. The traditional LPCC recognition algorithm is selected for comparison, and the specific experimental test results are shown in Table 1. Conclusion The BP neural network has a degree of intelligence: it can gain experience from errors and improve its recognition performance. The genetic algorithm can improve the learning speed and convergence rate of the BP neural network and shorten the time spent on training. Compared with traditional recognition algorithms (LPCC, MFCC, etc.), the BP-GA algorithm has the advantages of fast recognition speed, a high recognition rate, a low error rate, automatic error correction, and robustness to different speakers.
1,801.8
2019-01-01T00:00:00.000
[ "Computer Science" ]
A Proteome Comparison Between Physiological Angiogenesis and Angiogenesis in Glioblastoma The molecular pathways involved in neovascularization of regenerating tissues and tumor angiogenesis resemble each other. However, the regulatory mechanisms of neovascularization under neoplastic circumstances are unbalanced leading to abnormal protein expression patterns resulting in the formation of defective and often abortive tumor vessels. Because gliomas are among the most vascularized tumors, we compared the protein expression profiles of proliferating vessels in glioblastoma with those in tissues in which physiological angiogenesis takes place. By using a combination of laser microdissection and LTQ Orbitrap mass spectrometry comparisons of protein profiles were made. The approach yielded 29 and 12 differentially expressed proteins for glioblastoma and endometrium blood vessels, respectively. The aberrant expression of five proteins, i.e. periostin, tenascin-C, TGF-beta induced protein, integrin alpha-V, and laminin subunit beta-2 were validated by immunohistochemistry. In addition, pathway analysis of the differentially expressed proteins was performed and significant differences in the usage of angiogenic pathways were found. We conclude that there are essential differences in protein expression profiles between tumor and normal physiological angiogenesis. Neovascularization is a complex process taking place under physiological and pathological circumstances. There is large overlap in the cellular components, regulatory factors, and signaling mechanisms acting in the angiogenic process of regeneration, embryonic development, and tumor vascularization (1). In both physiological and pathological neovascularization signaling mechanisms, growth factors and their receptors, cell adhesion molecules and their specific extracellular matrix ligands take part (2). Angiocrine molecules like basic fibroblast growth factor and vascular endothelial growth factor and their receptors have been identified in the context of physiological and tumor angiogenesis (3). It is believed that the angiogenic switch is always triggered by hypoxia, starting with the up-regulation of vascular endothelial growth factor. The process of neovascularization consists not only of sprouting angiogenesis, but also of activation of endothelial precursor cells with the capacity to form de novo blood vessels (vasculogenesis). Vasculogenesis is also part of tumor vascularization but there is dispute about its relative importance (4,5). Despite the similarities there are obvious differences between physiological vascularization and angiogenesis in tumors. Under physiological conditions, the regulatory mechanisms are well-coordinated and balanced and endothelial cell functions are tightly orchestrated by both pro-and anti-angiogenic factors. In tumor angiogenesis however, there is an excess of pro-angiogenic factors leading to uncoordinated proliferation and tubulogenesis of endothelial cells and migration of mural cells like pericytes (6). It is likely that the molecular differences between physiological angiogenesis and neovascularization in tumors are mainly at the level of regulation of pathways and overexpression of particular proteins. In order to identify proteins that are specifically expressed in tumor angiogenesis, comparisons with protein profiles of blood vessels in which active normal angiogenesis take place are necessary. 
Physiological angiogenesis occurs in adults during the menstrual cycle and in repair or regeneration of tissue during wound healing (7). Therefore, we included blood vessels from proliferating endometrium (representing physiological angiogenesis) in the present analysis. A better model for tumor angiogenesis than that taking place in glial neoplasms is hardly imaginable and therefore, we implicated the blood vessels of glioblastomas in this study. Of all tumors, gliomas are among the most vascularized ones. Most glial tumors develop from low-grade, relatively benign neoplasms into high-grade tumors. Glioblastomas (or glioblastoma multiforme; GBM 1 ) are gliomas of the highest malignancy grade. These tumors are the most frequently encountered primary brain tumors in humans. GBMs are highly infiltrative tumors that show rapid clinical progression. Most patients succumb in less than a year after the diagnosis is made. In contrast to their low-grade counterparts, GBMs show high proliferation and cell density, and notorious microvascular proliferation and necrosis (as a sequel of the bad quality of the blood vessels) (8). It is believed that angiogenesis, the formation of new blood vessels from pre-existing vasculature, not de novo vasculogenesis from individual cells, is the dominant mechanism in the development of tumor vasculature (9,10). The aim of the present investigation was to elucidate differences between regenerative (physiological) angiogenesis and angiogenesis in neoplasms at the protein level. The identification of differences in protein expression patterns are of paramount clinical importance: in some situations the formation of new blood vessels should be stimulated, whereas in others the main goal is to repress neovascularization. To this end, we microdissected the blood vessels from GBM and proliferating endometrium by using laser capture microdissection. The microdissected blood vessel subsets were analyzed by nano liquid chromatography LTQ Orbitrap mass spectrometry. Differentially expressed proteins detected in either group were characterized and linked to molecular pathways. In addition, a selection of differentially expressed proteins was validated by immunohistochemistry. MATERIALS AND METHODS Patient Samples-Ten fresh-frozen surgical samples of GBM located in the cerebral hemispheres were taken from the files of the Department of Pathology, Erasmus MC, Rotterdam, The Netherlands. Five patients were male. The ages ranged between 41 and 86 years (median 67.5 years). In addition, 10 fresh-frozen endometrium samples were collected at the Department of Gynecology, Erasmus Medical Center, Rotterdam, The Netherlands. The samples were collected from premenopausal women who had undergone hysterectomies for diseases not involving the endometrium. Their ages ranged between 39 and 46 years (median of 43 years). Sections of 5 m from each sample were counterstained and examined by a pathologist (JMK) to verify the presence of blood vessels (Fig. 1). The use of all samples was approved by the Medical Ethical Committee of the Erasmus Medical Center Rotterdam, The Netherlands. Laser Capture Microdissection-The procedure was as previously described (12). Briefly, cryosections of 10 m were made from each sample and mounted on polyethylene naphthalate covered glass slides (P.A.L.M. Microlaser Technologies AG, Bernried, Germany). The slides were fixed in 70% ice-cold ethanol for a maximum of 2 h. 
Before laser microdissection, the slides were stained with hematoxylin and dehydrated in a series of ethanol solutions and left to air-dry for 5 min. The P.A.L.M. laser microdissector and pressure catapulting device, type P-MB, was used with PalmRobo version 2.2 software at 40 ϫ magnification. An area of 200,000 m 2 was microdissected from each sample, resulting in ϳ2000 cells/sample (estimated cell volume: 10 ϫ 10 ϫ 10 m). Altogether, four groups of samples were laser microdissected: glioma blood vessels, glioma tissue surrounding the blood vessels, endometrium blood vessels, and endometrium tissue surrounding the blood vessels. In addition, a corresponding sized area of the polyethylene naphthalate membrane was laser microdissected and used as a negative control. As internal quality control, three sections of a glioma sample without microdissection were collected. The preparation and analysis of the internal quality controls was similar to that of the test samples. Sample Preparation-The procedure for sample preparation was as previously published (12). For laser microdissection, 7 l of 0.1% RapiGest (Waters, Milford, MA) in 50 mM NH 4 HCO 3 pH 8.0 was used to collect the laser microdissected tissue areas. Protein LoBind, 0.5-ml Eppendorf tubes (Eppendorf, Hamburg, Germany) containing the samples were stored at Ϫ80°C until the time of preparation. After thawing the samples, 15 l of RapiGest solution were added to each tube to bring the final volume of each sample to 22 l. The cells were disrupted by external sonication for 1 min at 70% amplitude at a maximum temperature of 25°C (Branson Ultrasonics, Danbury, CT). For protein solubilization and denaturation, the samples were incubated at 37 and 100°C for 5 and 15 min, respectively. To each sample, 1.5 l of 100 ng/l gold grade trypsin (Promega, Madison, WI) in 3 mM Tris-HCl diluted 1:10 in 50 mM NH 4 HCO 3 was added and incubated overnight at 37°C. To inactivate trypsin and to degrade the RapiGest, 3 l of 25% trifluoroacetic acid were added and samples were incubated for 30 min at 37°C. Samples were centrifuged at maximum speed for 15 min at 4°C and the supernatant was transferred to LC-vials (Waters, Milford, MA) and measured by nanoLC Orbitrap mass spectrometry. The internal quality control sample was diluted at 1:400 using LC-MS grade water in order to obtain a comparable peptide concentration to that of the laser microdissected blood vessel samples. The internal control sample was prepared with the other samples and measured at regular intervals (every five samples) over the measurement period of about 30 days. LTQ Orbitrap Measurements-Nano LC-MS measurements were carried out on an Ultimate 3000 nano LC system (Dionex, Germering, Germany) online coupled to a hybrid linear ion trap/Orbitrap MS (LTQ Orbitrap XL; Thermo Fisher Scientific, Bremen, Germany). The total volumes of the digested samples (ϳ20 l) were loaded onto a C18 trap column (C18 PepMap, 300 m ID ϫ 5 mm, 5 m particle size, 100 Å pore size; Dionex, Amsterdam, The Netherlands) and washed for 10 min using a flow rate of 25 l/min of 0.1% trifluoroacetic acid. 
The trap column was switched in-line with the analytical column (PepMap C18, 75 m ID ϫ 150 mm, 3 m particle and 100 Å pore size; Dionex, Amsterdam, The Netherlands) and peptides were eluted with the following binary (A and B) gradient: 0 -25% solvent B in 120 min; 25-50% solvent B from 120 -180 min; solvent A consists of 2% acetonitrile and 0.1% formic in water and solvent B consists of 80% acetonitrile and 0.08% formic acid in water. The column flow rate was set to 300 nL/min. For MS detection a data dependent acquisition method was used: a high resolution survey scan from 400 -1800 Da was obtained in the Orbitrap (value of target of automatic gain control 10 6 , resolution 30,000 at 400 m/z; lock mass was set to 445.120025 u (protonated (Si(CH 3 ) 2 O) 6 ) 1 . Based on this survey scan the five most intensive ions were consecutively isolated (automatic gain control target set to 10 4 ions) and fragmented by collision activated dissociation applying 35% normalized collision energy in the linear ion trap. After precursors were selected for MS/MS, they were excluded for 3 min from fragmentation. Samples were prepared and measured in a randomized way. We measured the internal quality control sample once in every five measurements. Protein Identification-From raw mass spectrometer data files, tandem MS (MS/MS) spectra were extracted by Mascot Deamon (Matrix Science, London, UK) version 2.2.2 using the Xcalibur extract msn tool (version 2.07) into mgf files. All mgf files were analyzed using Mascot. The set-up was designed to search the UniProt (release 15.6) database. The Mascot search engine was used with fragment ion mass tolerance of 0.5 Da and a parent ion tolerance of 10 ppm. Oxidation of methionine was specified in Mascot as a variable modification. A minimum ion score of 25 was required for identification. Scaffold software (Version, 2_05_01), [www.proteomesoftware.com] (Portland, OR), was used to summarize and filter MS/MS based peptides and protein identifications. Peptide identifications were accepted if they exceeded a peptide probability of 95.0%. Protein identifications were accepted if protein probability exceeded 99.0% and at least two peptides were identified. Proteins that contained similar peptides and could not be differentiated based on MS/MS analysis alone were grouped. Label-Free Quantitation-Scaffold was used to generate a data file of the identified proteins including the number of sequenced peptides (spectral counts) that were found in each sample. On this data file we performed a Significance Analysis of Microarrays (SAM, Stanford University) version 3.1 and performed a multiclass comparison with false discovery rate of 5%. In the SAM analysis the relative frequency of occurrence of each protein in the four groups of samples was calculated. The relative frequency of occurrence indicates the mean differences between the protein measurements in each class, versus the mean of all measurements for the particular protein. Therefore, the positive values mean that the protein was measured in that class more than the average measurements of that protein in all the other classes. The negative values mean that the protein was measured in that class less than the average measurement of the same protein in other classes. The proteins that were significantly expressed in either one of the blood vessel groups as compared with the other three groups were taken as differentially expressed. 
In addition the data was analyzed using the Progenesis LC-MS Software package (Version, 2.5), Non Linear Dynamics, New Castle UK. We aligned all blood vessel samples of glioma and endometrium using one of the glioma blood vessel samples as reference. Identification was added to the Progenesis result file by performing a Mascot database search with above described parameters. The statistical analyses of the data was performed using Progenesis and Partek® Genomics Suite™ version 6.09.0129 software [http://www.partek. com/]. The zero values in the data matrix obtained by Progenesis were removed and the distribution of the normalized abundance values was log2-transformed and subsequently an ANOVA analysis was performed in Partek® Genomics Suite™. Pathway Analysis-The lists of differentially expressed proteins in glioma blood vessels derived from the data analysis were combined and uploaded into the Ingenuity IPA system version, 7.1 [www. ingenuity.com] to generate biological networks relating to glioma angiogenesis. In addition, we uploaded the lists of differentially expressed proteins from the endometrium blood vessels into the Ingenuity IPA system to generate biological networks relating to physiological angiogenesis. Subsequently, we compared the biological networks of the two groups in order to search for differences in the regulation of proteins in the pathways. Validation by Immunohistochemistry-Eight proteins were validated by immunohistochemistry on formalin-fixed, paraffin-embedded tissue sections; basement membrane-specific heparan sulfate proteoglycan core protein (P98160), integrin alpha-V (P06756), laminin subunit beta-2 (Q6PCB0), collagen alpha-1 (XVIII) chain (P39060), integrin-linked protein kinase (Q13418), tenascin C (P24821), transforming growth factor-beta-induced protein ig-h3 (Q15582), and periostin (Q15063) ( Table I). To investigate expressional variation between the two investigated vessel groups in independent tissue samples, ten additional samples from each group of glioma, endometrium, and normal brain were immunostained. Further, in order to determine the specificity of the above mentioned proteins to glioma angiogenesis, a series of other gliomas, carcinomas, vascular malformations, reactive conditions in which angiogenesis takes place, normal brain samples and placentas were tested for the presence of these proteins. More details about these samples are available in Table II. Immunohistochemical staining was performed following the manufacturer's procedures (alkaline phosphate technique) on 5-m paraffin sections. We followed the same procedure as previously described (15). The internal quality control was measured seven times at equal intervals during the measurement period of the complete sample set. The average CV of the spectral counts of the identified proteins in the sample was 20.8% and the average number of the identified proteins was 206 with a standard deviation of 16. No significant changes in the number of identified proteins were observed over the period of time in which measurements were repeated for internal quality control samples indicating that both the experimental set-up and the protein digests were stable over the measurement period of 30 days. The number of identified proteins did not change significantly during the measurement period of 30 days. In total 694 proteins were identified in the four sample groups. 
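The label-free statistics described in this passage (removal of zero abundance values, a log2 transform, and a per-protein ANOVA across the sample groups) can be sketched as follows. The snippet is a schematic reconstruction only: the column layout, group sizes, and the random "abundance" values are invented for illustration, and it does not reproduce the Progenesis/Partek pipeline itself.

```python
import numpy as np
import pandas as pd
from scipy.stats import f_oneway

rng = np.random.default_rng(0)

# hypothetical layout: rows = proteins, columns = samples (5 glioma-vessel, 5 endometrium-vessel)
groups = {"GV": [f"gv{i}" for i in range(5)], "EV": [f"ev{i}" for i in range(5)]}
cols = groups["GV"] + groups["EV"]
abund = pd.DataFrame(rng.gamma(2.0, 1e5, size=(50, 10)), columns=cols,
                     index=[f"protein_{i}" for i in range(50)])
abund.iloc[::7, 3] = 0.0                       # a few missing (zero) abundance values

pvals = {}
for prot, row in abund.iterrows():
    per_group = []
    for g_cols in groups.values():
        v = row[g_cols]
        v = v[v > 0]                           # remove zero values before the log2 transform
        per_group.append(np.log2(v.to_numpy()))
    if all(len(v) >= 2 for v in per_group):
        pvals[prot] = f_oneway(*per_group).pvalue   # one-way ANOVA per protein

anova = pd.Series(pvals, name="p_value").sort_values()
print("proteins with ANOVA p < 0.05:", int((anova < 0.05).sum()))
```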
The spectral counts of all identified proteins as observed in the Scaffold software were used in SAM analysis with a false discovery rate of 5% resulting in 152 differentially expressed proteins. We categorized these 152 differentially expressed proteins based on their relative frequency of occurrence (SAM analysis) in the sample groups. We considered a protein to be specific for a particular sample group if its occurrence was at least twice the occurrence encountered in any of the other groups. Out of the 152 differentially expressed proteins 29 were found in the GBM blood vessel group (Table IIIA) whereas 12 were found in the endometrium blood vessel group (Table IIIB). The raw spectral counts as obtained via Scaffold were also compared with Progenesis. Successful alignment of the data derived from the blood vessels from the endometrium and GBM was achieved. By using the Progenesis results, 18 out of the 41 proteins appeared to be differentially expressed (ANOVA; p Ͻ 0.05) between the sample groups. (3-B ϭ Table IIIB), The significant proteins in the ANOVA analyses are indicated in 3-B ϭ Table IIIB. Pathway Analysis-The list of 29 proteins, which were significantly up-regulated in GBM blood vessels, was uploaded in IPA and mapped against the database. IPA could map all proteins in three different networks. The first matched network designated as "tissue development and cell-to-cell signaling" had a score of 50 and contained 20 of the identified proteins. Ten of the 20 proteins were associated with a function designated as "cardiovascular system development and function" and five proteins, namely: collagen alpha-1(XVIII) chain, laminin subunit alpha-5, laminin subunit gamma-1, fibronectin, and integrin alpha-V appeared to be related to angiogenesis. In addition, the 12 differentially expressed proteins identified in the endometrium blood vessels were uploaded in IPA and were mapped to four different networks. The first matched network called "tissue development, embryonic development" contained 11 proteins. Nine proteins showed a direct relation with the function called "Cardiovascular System Development and function" and three proteins (caveolin-1, myosin-Ic, and protein kinase C delta-binding protein) appeared to be related with proliferation of endothelial cells. At the level of molecular and cellular functions, the differentially expressed proteins that were identified in the glioma blood vessels were associated with cell-to-cell signaling, cellular movements, and cell morphology. In contrast, the differentially expressed proteins in the endometrium blood vessels were associated with molecular transport and excretion of proteins. Immunohistochemistry-Eight proteins were selected for immunohistochemical validation based on the availability of antibodies (Table 3A). Six of these proteins had been significantly emerged from both the SAM and the ANOVA analysis. Integrin-alpha V and integrin-linked protein kinase were selected based on their relation to angiogenesis (12, 13) (Supplemental file S1 and S2). The overexpression of five out of eight proteins in glioma blood vessels compared with endometrium vessels was confirmed by immunohistochemistry. The proliferated blood vessels of GBM samples were immunopositive for periostin, TGF-␤ induced protein ig-h3, integrin-alpha V, tenascin C, and laminin (Fig. 2). The other three proteins, e.g. 
basement membrane-specific heparan sulfate proteoglycan core protein, collagen alpha-1 (XVIII) chain, and integrin-linked protein kinase, were not present in the GBM blood vessels only, but were also found in the endometrium blood vessels by immunohistochemistry. The results of the immunostaining of various glioma types showed some variation in expression of the above mentioned proteins in the blood vessels, depending on the stage of proliferation; high expression in young sprouts, low in sclerotic, abortive vessels. [Table III, A and B (caption): differentially expressed proteins in glioma (A) and endometrium (B) angiogenesis based on the spectral counts of each protein after SAM analysis; p values = ANOVA of peptide abundances in Progenesis for GV and EV; ¥ marks proteins selected for IHC validation; *** indicates a significant p value (p < 0.05); ND = not detected in the Progenesis analyses; GV = glioma blood vessels, GT = glial tumor tissue, EV = endometrium blood vessels, ET = endometrial glands and stroma; RFO = relative frequency of occurrence, i.e. the mean difference between the protein measurements in a class and the mean of all measurements of that protein, so positive values indicate above-average and negative values below-average occurrence in that class; the color grade (dark green highest, dark red lowest, shades of yellow/orange in between) represents the frequency of occurrence from the SAM analysis.] The expression of each protein was constant among the ten different samples that were tested in each group. For comparison of the results of immunohistochemistry, we included other nonglial tumors and reactive conditions. The results of immunostaining of carcinomas, vascular malformations, reactive conditions, angiogenesis in placentas, and normal blood vessels in normal brains varied and are summarized in Table IV and Fig. 3. DISCUSSION Although significant progress has been made in the field of proteomics over the last two decades, the identification and quantification of the entire proteome of tissue samples is not possible. The analysis of complex protein mixtures is currently one of the most challenging subjects in the area of proteomics.
The normal biological variation in protein expression among cells of identical lineage is around 15% to 30% (15). Under situations of stress or in tumors, an even higher variation of protein expression is expected (15,16). A major source of this expressional variation is caused by differences in the structure and function of different cell populations present in individual samples. The complexity of human biopsy samples is still a considerable obstacle in proteomics analyses (17). Reduction of the complexity of protein mixtures can be reached at various levels. First, at the level of the tissue, methods of sample purification applied prior to analysis may well improve the accuracy of detection and increase the chance to identify low abundant proteins (18). By laser capture microdissection particular microscopic structures (like blood vessels) can be targeted and isolated and therefore reducing the chance of averaging out proteins of interest. Further, there are methods of enhancing the detection of proteins in the proteomics technology using very low numbers of cells. In several studies, it was proven that the use of advanced methods of fractionation prior to measuring the peptide digests in any of the mass spectrometers increases the number of identified proteins (12, 18, 19 -20, 21). In order to reduce the complexity of the samples in the present study and enrich for blood vessels in our glioblastoma and endometrium blood vessel samples, laser microdissection was applied to separate the blood vessels from the surrounding tissue. Additionally, the samples were fractionated in a solvent gradient using nano liquid chromatographic separation online coupled with a mass spectrometer. The rapid scan rate, high mass accuracy and sensitivity of the LTQ Orbitrap assisted in the identification of relatively large numbers of proteins from small numbers of cells of no more than 2000 cells (ϳ270 ng total protein, estimating that one cell has ϳ135 pg total protein). Because of the relatively large differences between the tissue types used in this study, alignment of all data was not possible and label-free comparison for Progenesis analysis was only possible for the groups of microdissected blood vessels. The spectral counts of all 694 identified proteins were taken into consideration for SAM analysis in which comparisons of the four groups were made. Proteins that were relatively up-regulated in either glioblastoma blood vessels or endometrium blood vessels were identified. The combination of both approaches resulted in a list of reliably differentially expressed proteins. A portion of the proteins that emerged from both analytic strategies was successfully validated by immunohistochemistry. The results of immunohisto- Table VI). The overexpression of periostatin, TGF-beta, integrin alpha V, tenascin C, and laminin in the vasculature of GBM and also in pilocytic astrocytoma, ependymoma, and anaplastic oligodendroglioma row 1, 3-5. The sample set was extended with a nonglial tumor (renal cell carcinoma) of which the vasculature was variably immunopositive for the various proteins (sixth row). The nonneoplastic arteriovenous malformation (AVM) and cavernous hemangioma (CH) stained positive for all proteins tested, except for immunonegativity for laminin of the AVM. Remarkably, the blood vessels in ischemic brain tissue were immunopositive for all proteins found in the glioma vessels. These results were constant among all the samples that were used in each type of tissue. 
chemical validation proved that the combination of laser capture microdissection and subsequent nano-LC Orbitrap mass spectrometry is a powerful approach to find tissue specific proteins. Nevertheless, some identified proteins were immunohistochemically detected in glioblastoma as well as endometrium blood vessels. It may well be that the differences detected by the LTQ Orbitrap mass spectrometry did not always match the discriminative power of immunohistochemistry. Further, there may be issues of the specificity of particular antibodies. The blood vessels in GBM display phenotypical changes ranging from incipient proliferation of endothelium to sarcomatous vascular structures and these changes can all be encountered in the vasculature of the same tumor. The endothelial, pericytic and smooth muscle cells take various spatial positions within the vascular walls depending on the size of the proliferated vessel (22). The endothelial cells show increased numbers of caveolae and fenestrations, prominent pinocytotic vesicles, diminished numbers of tight junctions, leading to leakage and disruption of the blood-brain barrier functions (4). In a previous study, we identified several proteins that were specifically up-regulated in glioma vasculature although not in resting blood vessels of normal brain (12,23). In the present study we did not compare the glioma blood vessels with the resting vessels of normal brain, but with vessels taken from tissue in which active angiogenesis takes place, i.e., endometrium vessels instead. We found specific up-regulation of 29 proteins in the GBM. All proteins are known to take part in angiogenic pathways. Because these proteins are expressed at significantly lower levels in the endometrium samples, we consider them as characteristic of angiogenesis in neoplasia. Fourteen of the 29 proteins appear to be structural proteins; two are integrins and five are enzymes. Periostin is an extracellular matrix protein that is involved in cell adhesion (24). It regulates cell function and cell-matrix interaction but is not a component of the basal lamina itself. Periostin was found in the blood vessels of nonsmall cell lung cancers and plays a critical role in cardiac remodeling (25,26). Recently, Kii et al. proved that periostin promotes the incorporation of tenascin-C into the ECM and mediates the formation of the meshwork architecture of the ECM (27). Tenacin-C was also found overexpressed in the GBM blood vessels in the present study. Tenascin-C is an extracellular matrix protein that participates in normal fetal development and wound healing (28). In GBM its presence correlates well with microvascular density (28). Tenascin-C mediates vascular endothelial growth factor expression (VEGF) (29), which is up-regulated and induced by hypoxia inducible factor (HIF-1) under hypoxic conditions (30). The finding of the specific expression of integrin alpha V in glioma angiogenesis in this study is also of interest because this protein serves as a receptor for the extracellular matrix. The relation of integrin alpha V and angiogenesis has been described in several types of cancer among which cervical can-cers (31), ovarian carcinoma (32), breast cancer (33), and melanoma (34). We also found overexpression of transforming growth factor-beta-induced protein ig-h3 in the GBM blood vessels (35). 
TGF␤ regulates the expression of various proteins TGF␤ induced protein ig-h3, periostin, integrin alpha V, tenascin-C, fibronectin, colligin 2, caldesmon, acidic calponin, and basement membrane-specific heparan sulfate proteoglycan core protein (35,36). Further investigations of the interrelationships between these proteins may reveal their relative importance and whether they should be considered potential targets for therapeutic anti-angiogenic interventions.
6,469.4
2012-01-25T00:00:00.000
[ "Biology", "Medicine" ]
An Expert System to Diagnose Respiratory Tract Infection Using the Certainty Factor Method. Respiratory tract infections are infectious diseases that interfere with the process of human breathing. Various diseases can affect the breathing process, most of which can only be treated by a lung specialist; however, obtaining a consultation with a pulmonary specialist can take hours and is expensive. An expert system is therefore needed that can quickly determine the type of respiratory disease, how to handle it, and the solutions to be provided. An expert system is a system that transfers human knowledge into a computer, which is then used to solve problems that usually require human expertise. One way to apply an expert system to the diagnosis of respiratory tract infections is to use the certainty factor method. The certainty factor method is used to solve problems with uncertain answers, and it also produces uncertain answers; this uncertainty is influenced by two factors, namely uncertain rules and uncertain user answers. The research aims to build an expert system application for handling respiratory tract infection problems, with Visual Studio 2010 as the tool for designing the application and a Microsoft Access 2007 database for storage. This expert system is able to calculate similarity weights based on the symptoms of a respiratory tract infection using the certainty factor method and to provide reports using Crystal Reports. Introduction ISPA (acute respiratory tract infection) is an infection that interferes with the human respiratory process. In general, the infection is caused by a virus that attacks the trachea (breathing tube), the nose, and even the lungs. In this study the author discusses respiratory diseases because respiratory tract infections can act as a window that helps detect abnormalities or other diseases in the human body: problems that arise in breathing can reflect the overall health condition of the human body. It is therefore better to recognize early the symptoms and signs of illness caused by respiratory tract infections in humans. To avoid errors in making a diagnosis, and so that treatment is not obtained too late, the authors develop an expert system application to diagnose respiratory tract infections and to simplify determining the disease suffered by a person. Because doctors and specialists have limited time for consulting with patients, the authors build an expert system for diagnosing respiratory infections using the certainty factor method, which may help resolve this issue. Methods The certainty factor (CF) method can quantify the degree of certainty and uncertainty of a fact or a set of rules. In this research an expert system is designed to detect the type of ISPA disease. With this system, the public can find out in detail the symptoms of human respiratory infections and the ways to overcome them before consulting a doctor. The system is designed as a rule-based expert system using the certainty factor method, built with Visual Studio 2010, with Microsoft Access 2007 used for data storage. The end result of this application is the type of illness, determined from the symptoms that have been selected. With this ISPA diagnosis expert system, users are expected to learn in detail the symptoms of ISPA diseases in humans and how to overcome them. Theory a.
Expert system An expert system is an artificial intelligence program that combines a knowledge base with an inference system to mimic an expert. An expert system tries to transfer human knowledge into a computer so that the computer can resolve problems in the way an expert would. It is expected that, with such an expert system, users can solve specific problems without the help of experts in the field. An expert system is a computer system that emulates the decision-making ability of an expert; the term "emulates" means that the expert system is expected to work in all respects like an expert, and an emulation is much more powerful than a simulation, which only captures what is evident in a few respects. An expert system comprises two main components: a knowledge base that contains the knowledge, and an inference engine that draws conclusions. The conclusion of the expert system is its response to the user's request. [1] b. Certainty factor The certainty factor method was proposed by Shortliffe in 1975 to accommodate the uncertainty of an expert's reasoning (inexact reasoning). An expert such as a doctor often analyzes the available information with phrases such as "maybe", "likely", or "almost certainly". To accommodate this, the certainty factor (CF) is used to describe the expert's level of confidence in the matter at hand. In expressing degrees of certainty, the certainty factor quantifies an expert's degree of certainty about the data. This concept is formulated in a basic formula, and the subsequent calculation combines two or more rules with different evidence for the same hypothesis. To calculate the percentage for a disease, the equation CFpercentage = CFcombine × 100% is used (a minimal illustrative sketch of this calculation appears after the conclusions below). c. ISPA disease ISPA, or respiratory tract infection, is an infection that interferes with human breathing during daily activities. In general, these infections are caused by a virus that attacks the trachea (breathing tube), the lungs, and even the nose. The term ISPA is derived from three elements, namely infection, respiratory tract, and acute, of which the first two are explained as follows: a. Infection is the entry of germs or microorganisms into the human body, where they multiply and cause the symptoms of respiratory tract infection. b. The respiratory tract comprises the organs from the nose to the alveoli, together with adnexal organs such as the sinuses, the middle ear, and the pleural cavity. Analysis a. From the calculations and diagnoses that have been carried out, it can be concluded that for case I, with three symptoms, one disease is diagnosed, namely pharyngitis, with a percentage of 84%; while for case II, with two symptoms, two diseases are diagnosed, namely pharyngitis and pneumonia, with pharyngitis at 91% and pneumonia at 67%. Conclusion The conclusions of the analysis and discussion of the expert system for diagnosing respiratory infections using the certainty factor method are as follows. a. The diagnosis of respiratory infections using the certainty factor method is based on field research at the Pirngadi hospital, and experiments on the research data have been carried out with the developed application. b. From the calculation and diagnosis results it can be concluded that for case I, with three symptoms, one disease is diagnosed, namely pharyngitis, with a percentage of 84%; while for case II, with two symptoms, two diseases are diagnosed, namely pharyngitis and pneumonia, with pharyngitis at 91% and pneumonia at 67%.
c. This application implements the certainty factor method to diagnose by selecting the symptoms of respiratory tract infection, which are first input by the user or an expert; the data are then processed, and the output is advice given based on the symptoms entered. d. The design of this application was built using tools such as UML, Visual Studio 2010, and Microsoft Access 2007 for data storage.
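As noted in the Theory section above, the diagnosis rests on combining the certainty factors of the selected symptoms. The sketch below illustrates that arithmetic: the combination rule CF_comb = CF1 + CF2 * (1 - CF1) for two rules supporting the same hypothesis, and the conversion CFpercentage = CFcombine × 100%, follow the standard Shortliffe formulation cited in the text, while the rule base, symptom names, and user answers are purely hypothetical and are not taken from the paper's knowledge base.

```python
def cf_rule_user(cf_expert, cf_user):
    """CF of one piece of evidence: the rule's expert CF weighted by the user's certainty."""
    return cf_expert * cf_user

def cf_combine(cf_values):
    """Combine the CFs of several rules supporting the same hypothesis:
    CF_comb = CF1 + CF2 * (1 - CF1), applied pairwise (all CFs assumed >= 0)."""
    total = 0.0
    for cf in cf_values:
        total = total + cf * (1.0 - total)
    return total

# hypothetical rule base: symptom -> expert CF for the hypothesis "pharyngitis"
rules = {"sore throat": 0.8, "fever": 0.6, "difficulty swallowing": 0.7}
# hypothetical user answers (degree of certainty that each symptom is present)
answers = {"sore throat": 0.8, "fever": 0.6, "difficulty swallowing": 0.4}

evidence_cfs = [cf_rule_user(rules[s], answers[s]) for s in rules]
cf_total = cf_combine(evidence_cfs)
print(f"CF percentage for pharyngitis: {cf_total * 100:.1f}%")
```

Shortliffe's full formulation uses different combination branches when positive and negative certainty factors are mixed; this minimal sketch omits that case.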
1,653.4
2020-06-01T00:00:00.000
[ "Medicine", "Computer Science" ]
In-situ control of microdischarge characteristics in unipolar pulsed plasma electrolytic oxidation of aluminum Microdischarges occurring during plasma electrolytic oxidation are the main mechanism promoting oxide growth compared to classical anodization. When the dissipated energy by microdischarges during the coating process gets too large, high-intensity discharges might occur, which are detrimental to the oxide layer. In bipolar pulsed plasma electrolytic oxidation a so called ‘soft-sparking’ mode limits microdischarge growth. This method is not available for unipolar pulsing and for all material combinations. In this work, the authors provide a method to control the size- and intensity distributions of microdischarges by utilizing a multivariable closed-loop control. In-situ detection of microdischarge properties by CCD-camera measurements and fast image processing algorithms are deployed. The visible size of microdischarges is controlled by adjusting the duty cycle in a closed-loop feedback scheme, utilizing a PI-controller. Uncontrolled measurements are compared to controlled cases. The microdischarge sizes are controlled to a mean value of A=5⋅10−3mm2 and A=7⋅10−3mm2, respectively. Results for controlled cases show, that size and intensity distributions remain constant over the processing time of 35 minutes. Larger, high-intensity discharges can be effectively prevented. Optical emission spectra reveal, that certain spectral lines can be influenced or controlled with this method. Calculated black body radiation fits with very good agreement to measured continuum emission spectra ( T=3200 K). Variance of microdischarge size, emission intensity and continuum radiation between consecutive measurements is reduced to a large extent, promoting uniform microdischarge and oxide layer properties. A reduced variance in surface defects can be seen in SEM measurements, after coating for 35 minutes, for controlled cases. Surface defect study shows increased number density of microdischarge impact regions, while at the same time reducing pancake diameters, implying reduced microdischarge energies compared to uncontrolled cases. Original Content from this work may be used under the terms of the Creative Commons Attribution 4.0 licence. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI. Introduction Plasma electrolytic oxidation (PEO) is a process to passivate various lightweight metals. Up until now, methods to coat Al, Ti, Mg, Zr, Zn, Mo, Nb, Si, Ta and their respective alloys have been found [1][2][3][4][5][6][7][8][9]. New results show that even carbon steels can be coated by PEO with an insulating oxide layer [10]. An addition of chemical compounds or even solids, like nanoparticles, increases the range of producible surfaces dramatically. Due to this flexibility, the PEO process has gained popularity over the last 20 years, as the number of journal publications per year has risen from around 20 in 2003, to over 450 in 2017 [11]. One of the main differences to classical anodization is an increased cell voltage. The increase of cell voltage over a process and material specific voltage, which is believed to be the breakdown voltage of the oxide layer, leads to the generation of microdischarges on the anode surface. These microdischarges are statistically distributed on the surface with lifetimes of 10-1000 µs [12][13][14]. 
In the literature, it is widely accepted that microdischarges are one of the main factors of enhanced coating growth characteristics in comparison to classical anodization [15]. Due to the temporally and spatially stochastic nature of microdischarges, a detailed analysis is a challenging task. The process of plasma electrolytic oxidation can be divided into two different regimes concerning the dynamic behaviour of voltage and current. The constant voltage mode is called potentiostatic mode. In this mode, the current is limited by oxide layer conductivity. A large current spike in the beginning is followed by a rapid fall in amplitude, due to the growth of oxide layer thickness [16,17]. The number and size of microdischarges exhibits a maximum in the beginning, where the current is at the maximum. As the current decreases, the individual microdischarge size rises to a certain level and decreases thereafter. This effect results in a limiting of the maximum microdischarge energy and reduces highintensity discharges, which can be detrimental to the oxide layer [18]. However, the coating growth rate is proportional to the current (or current density). This proportionality was derived from Faraday's law in classical anodization and was experimentally verified for the PEO process [19,20]. The PEO process does not obey Faraday's law anymore, as it is much more complex, but the dependency on current density is still accurate. So the self-limited scaling behaviour is accompanied by a reduced coating growth in the potentiostatic mode. In contrast, the galvanostatic mode exhibits a constant coating growth over time, utilizing a constant current mode [19]. In this case, the voltage rises rapidly in the non-plasma anodization phase, while growing slower during the onset of microdischarges and behaves proportional to the oxide layer thickness [19]. At this point, microdischarges are relatively small, but large in number density [21]. With growth of the oxide layer the size of microdischarges increases and the corresponding number density decreases. The lifetime and current per individual microdischarge increases as well [21]. A rise in generator voltage leads to more energetic microdischarges, as individual surface area and depth of the oxide layer increases. As a result, by keeping the generator current constant, the number of simultaneous microdischarges has to fall. If individual current of microdischarges becomes too large, arcing might occur [1]. To prevent this, a method has been developed to limit the maximum size and energy per discharge for the galvanostatic mode. In bipolar pulsed plasma electrolytic oxidation the charge ratio between positive and negative half cycles is important to generate a so called soft-sparking mode [22]. Martin et al found that the build-up of negative and positive charge plays an important role in the onset of a so-called 'soft-sparking' mode [23]. As a matter of fact, the negative half cycle is important for decharging the oxide surface and changing the dynamic behaviour of microdischarges. It was observed that the strongest microdischarges were reduced compared to unipolar pulsing and reduced electron temperature spikes were recorded using optical emission spectroscopy [15]. However, bipolar pulsing can also be disadvantageous, as the changed polarity also leads to dissolution of the oxide layer. 
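As a side note to the proportionality between coating growth rate and current density mentioned above, the classical-anodization picture behind it can be made tangible with Faraday's law. The sketch below estimates an oxide growth rate from the current density; the material constants for Al2O3, the current efficiency, and the chosen current densities are assumptions for illustration, and, as the text stresses, the PEO process itself no longer obeys Faraday's law exactly.

```python
# Faraday-law estimate of anodic oxide growth rate from current density. This is a
# rough illustration of the j-proportionality, not a PEO model; constants are assumed.
F   = 96485.0        # C/mol, Faraday constant
M   = 101.96e-3      # kg/mol, molar mass of Al2O3
z   = 6              # electrons transferred per formula unit of Al2O3
rho = 3.2e3          # kg/m^3, approximate density of the anodic oxide

def growth_rate_um_per_min(j_A_per_cm2, efficiency=0.7):
    """Oxide thickness growth rate for a given current density (classical-anodization picture)."""
    j = j_A_per_cm2 * 1e4                        # convert to A/m^2
    dh_dt = efficiency * j * M / (z * F * rho)   # m/s
    return dh_dt * 1e6 * 60                      # um/min

for j in (0.05, 0.10, 0.20):                     # assumed current densities in A/cm^2
    print(f"j = {j:.2f} A/cm^2 -> ~{growth_rate_um_per_min(j):.2f} um/min")
```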
Gao et al reported that negative pulsing might result in reduced corrosion resistance and an increase in surface defects compared to unipolar pulsing for treated magnesium substrates [24]. A soft-sparking mode established by bipolar pulsing may therefore not be applicable in every process. These findings were supported by Tsai and Chou, who found that the dense inner layer of coatings on magnesium and titanium was inferior compared to the inner layer of aluminum coatings; coating characteristics comparable to aluminum could only be reproduced by adding aluminate anions to the electrolyte [25]. For unipolar pulsed plasma electrolytic oxidation, no method comparable to a 'soft-sparking' mode is known. Dehnavi et al reported that reduced duty cycles in a unipolar pulsed configuration may lead to a slightly higher number of microdischarges and a slightly smaller individual size of microdischarges [26]. That study was devoted to parametric changes of the electrical parameters and paved the way for subsequent research in this area. Typically, the PEO process is set to certain electrical parameters at the beginning, which are kept constant over time (galvanostatic or potentiostatic operation). As there is no feedback loop regarding microdischarge characteristics, the microdischarge behaviour changes with growth of the oxide layer. In this work, a method to keep the microdischarge size constant over time by a closed-loop control of the pulse duty cycle and current amplitude is proposed. The current amplitude I(t) is increased anti-proportionally to the duty cycle, to compensate for a smaller pulse-on time T_on(t). This ensures a constant charge transfer Q(t) to the sample, which would otherwise be reduced with coating time, as in the potentiostatic mode of operation. Experimental setup A schematic of the experimental setup is shown in figure 1. The process chamber consists of a double-walled glass cylinder with mountings for optical-grade quartz glass windows on opposite sides. A cooling pump (Julabo FP-35) is connected to a heat exchanger system to keep the electrolyte temperature under control. Heat exchange and mixing of the electrolyte are achieved by a magnetic stirrer at the bottom of the vessel. A PTFE-coated PT-100 sensor is connected to an A/D converter and measures the electrolyte temperature during the coating process. A cylindrical stainless steel electrode acts as the cathode. Aluminum (6061) specimens are cut to 20 × 10 mm in size and are polished, degreased and rinsed before treatment. The electrolyte consists of distilled water with an addition of 1 g/l potassium hydroxide (KOH). A power supply (Magpuls MP2-30) generates a rectangular, unipolar pulsed waveform while controlling the time-averaged current. The possible frequency range is 50 Hz to 100 kHz. At the beginning of each treatment, the generator current is set to 0.5 A for a duty cycle of 0.5, and the pulse on-time is set to 200 µs. This leads to an initial frequency of 2.5 kHz. Reduction of the duty cycle is achieved by a decrease in the pulse-on time; the pulse-off time is constant (200 µs). A minimum pulse-on time of 10 µs is programmed into the control setup to limit the minimum value in case of error. For this case, a maximum pulsing frequency of 4.76 kHz would be applied. The current amplitude is increased anti-proportionally to the duty cycle, so that a constant charge transfer is achieved. Aluminum specimens are processed for a duration of 35 minutes.
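The constant-charge rule described here (current amplitude raised anti-proportionally to the duty cycle, with a fixed 200 µs pulse-off time and a 10 µs lower bound on the pulse-on time) can be captured in a small helper function. The sketch below is only an illustration of that arithmetic, not the generator's control software; the starting point of 0.5 A at a duty cycle of 0.5 is taken from the text.

```python
T_OFF_US = 200.0          # fixed pulse-off time from the setup description
I0, D0 = 0.5, 0.5         # initial current amplitude (A) and duty cycle
T_ON_MIN_US = 10.0        # programmed lower limit for the pulse-on time

def pulse_settings(t_on_us):
    """Return duty cycle, pulse frequency and the current amplitude that keeps the
    time-averaged current (and hence the charge transfer) constant: I * D = I0 * D0."""
    t_on_us = max(t_on_us, T_ON_MIN_US)
    period_us = t_on_us + T_OFF_US
    duty = t_on_us / period_us
    freq_khz = 1e3 / period_us
    current = I0 * D0 / duty
    return duty, freq_khz, current

for t_on in (200.0, 100.0, 50.0, 10.0):
    d, f, i = pulse_settings(t_on)
    print(f"T_on = {t_on:5.0f} us -> duty = {d:.3f}, f = {f:.2f} kHz, I = {i:.2f} A")
```

With a pulse-on time of 200 µs this reproduces the initial 2.5 kHz operating point, and at the 10 µs limit it gives the 4.76 kHz maximum pulsing frequency quoted above.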
A scientific CCD camera (PCO sensicam qe) records microdischarges during the process with a repetition frequency of 1 Hz and an exposure time of 20 ms. The working distance, together with a zoom lens (Navitar 6000 Zoom Lens), is adjusted to reach a pixel size of 13 µm. A digital oscilloscope (LeCroy Waverunner 8254) is equipped with a differential voltage probe (Tektronix ADP305) and a current probe (LeCroy CP030) for monitoring the discharge voltage and current for each pulse and over the processing time. Optical emission spectra of microdischarges are recorded with an OceanOptics QE65000 spectrometer at a repetition rate of 1 Hz and an exposure time of 1000 ms and are corrected for the efficiency of the spectrometer. The spectrometer is relatively calibrated by means of a tungsten ribbon lamp and a deuterium lamp for a wavelength range of 200 nm to 950 nm. All devices are controlled by LabView software, and the generated data is transferred to a personal computer. Defect size and structure of the aluminum specimens after treatment are investigated with a scanning electron microscope (Jeol JSM-6510).

Discharge detection and closed-loop control

Microdischarges are analyzed with an in-house developed MATLAB code. In a first step, the contrast is stretched to the boundaries of each intensity histogram. Noise is reduced by a two-dimensional 5 × 5 Wiener filter. Inhomogeneous lighting and 'hot pixels' are removed by subtracting a background image without microdischarge emission. The next step is image segmentation. In image processing there are several methods for image segmentation and object identification: greyscale images can be filtered either by edge detection or by intensity-based algorithms. Two-dimensional edge detection algorithms, like the Sobel operator, are typically computationally intensive; in Landau notation they scale with f ∈ O(n²), where n denotes the number of pixels [27]. In contrast, intensity-based algorithms, like thresholding high-pass filters, scale with f ∈ O(n) [27]. The resolution (which can be regarded as n) of each image has to be as high as possible to ensure that the minimum size of a microdischarge is at least as large as one pixel. Therefore, a simple threshold-based algorithm is implemented for microdischarge segmentation. In image processing, the threshold value for the high-pass filter is often derived by unsupervised adaptive threshold methods. These methods, like Otsu's method [28], are computationally intensive and therefore not suitable for an in-situ application. Furthermore, the threshold level changes the number of pixels detected in an image, and a time-variant value would change the size of the detected microdischarges. For the sake of comparability, a fixed threshold is chosen. Its value is determined by taking a raw set of representative images from an uncontrolled case and running the maximum-entropy method by Yin [29]. By applying this threshold filter, the image is converted into a binary image. Subsequently, coherent pixels are merged and then labelled. The size of an individual microdischarge equals its number of pixels multiplied by the area per pixel (13 × 13 µm², determined by the optical setup). The mean microdischarge size is calculated for each frame by dividing the summed microdischarge area by the number of microdischarges. A graphical demonstration of the microdischarge detection algorithm is shown in figure 2, and a Python sketch of the processing chain is given below.
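The detection chain can be summarized in a few lines. The following Python sketch mirrors the MATLAB steps described above; the fixed threshold value and the function layout are illustrative assumptions, since the original code is not reproduced here.

import numpy as np
from scipy.signal import wiener
from scipy import ndimage

PIXEL_AREA = (13e-3) ** 2   # mm^2 per pixel (13 x 13 um optical resolution)
THRESHOLD = 60              # fixed grey value; stand-in for the level found
                            # via the maximum-entropy method [29]

def discharge_statistics(frame: np.ndarray, background: np.ndarray):
    """Python sketch of the MATLAB detection chain described above."""
    img = frame.astype(float)
    # 1. stretch contrast to the boundaries of the intensity histogram
    img = (img - img.min()) / max(img.max() - img.min(), 1e-9) * 255.0
    # 2. 5 x 5 Wiener filter for noise reduction
    img = wiener(img, mysize=5)
    # 3. remove inhomogeneous lighting / hot pixels by background subtraction
    img = np.clip(img - background, 0, None)
    # 4. fixed-threshold high-pass filter -> binary image
    binary = img > THRESHOLD
    # 5. merge and label coherent pixels (connected components)
    labels, n = ndimage.label(binary)
    if n == 0:
        return 0, 0.0
    sizes = ndimage.sum(binary, labels, index=range(1, n + 1)) * PIXEL_AREA
    return n, sizes.mean()   # number of discharges, mean size in mm^2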
The mean microdischarge size, the photoemission intensities, and the number of discharges per recording are evaluated and saved in a text file. Figure 3 visualizes the multivariable control loop approach as a block diagram. The manipulated variables are the duty cycle (or T_on) and the current amplitude, while the time-averaged current and the microdischarge size are the controlled variables. The two control loops serve different goals. The averaged current I is internally controlled by the power supply to maintain galvanostatic operation with a constant charge transfer over the processing time; the transferred charge Q is calculated as the integral of the measured current over the processing time, Q(t) = ∫ I(t') dt'. Control of the microdischarge size is performed via the optical detection path and a software-implemented PI-controller manipulating the duty cycle. A LabView program controls the whole process on the hardware and software side. Camera recordings taken at a repetition rate of 1 Hz are saved in a binary format and read by the MATLAB discharge detection code. The evaluated information is stored in a text file and read back by LabView, where it serves as the process variable for the closed-loop control of the mean microdischarge size. A digital PI-controller is used as the controlling element; its parameters are calculated from instability criteria [30]. Yerokhin et al. [31] found that the system response for voltage and current is linear over a wide range of operating voltages. For a change in duty cycle and current amplitude, however, the change in discharge size and discharge number is non-linear. Moreover, the plasma electrolytic oxidation process itself is time-variant, depending mainly on the thickness and composition of the oxide surface; in control engineering this is described by a process state variable. For a time interval of tens of seconds the system response, and therefore the state, can be linearized as time-invariant [32]. A non-linear and time-variant system response makes it difficult to find adequate PI-controller values. Therefore, a gain-scheduling scheme is used in this work [33]: since the sensitivity to changes in the duty cycle increases with increasing oxide thickness, the proportional gain is decreased with processing time (a hedged sketch of such a controller is given below). The digital PI-controller programmed in LabView transfers the new value for the duty cycle to the power supply. The microdischarge detection algorithm takes around 100 ms (160 ms in the worst case) on our personal computer system. The entire process loop takes less than 500 ms, so that a repetition rate of 1 Hz can be maintained, which is sufficient given the slow changes of the process state over a processing time of tens of minutes. To validate the control setup and the detection algorithms, uncontrolled measurements are compared to controlled cases with set points for the mean discharge size of A1 = 5 · 10⁻³ mm² and A2 = 7 · 10⁻³ mm², respectively. The value for A2 is chosen because it is slightly lower than the mean value in the uncontrolled case before large, high-intensity discharges appear (A_uncon = 8.6 · 10⁻³ mm²); staying below this value is later shown to effectively suppress high-intensity discharges. A1 is chosen as a set point to show the results for an even lower value. In our case the set point cannot be chosen lower, as the required voltage increases with smaller set points and with increasing coating time, and the generator can only supply a maximum voltage of 1000 V. Set points lower than A1 would therefore lead to voltages exceeding the maximum generator voltage.
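A hedged sketch of such a controller follows; the gains, the concrete scheduling law, and the windup handling are illustrative assumptions, as the paper only states that the proportional gain is reduced with processing time.

# Illustrative sketch of the software PI-controller with gain scheduling.

T_ON_MIN, T_ON_MAX = 10e-6, 200e-6   # programmed pulse-on-time limits in s

class ScheduledPI:
    def __init__(self, kp0: float, ki: float, setpoint: float):
        self.kp0, self.ki, self.setpoint = kp0, ki, setpoint
        self.integral = 0.0

    def update(self, mean_size: float, t: float, dt: float = 1.0) -> float:
        """Return a new pulse-on time from the measured mean discharge size
        (in mm^2); t is the processing time in s."""
        error = mean_size - self.setpoint          # positive: discharges too large
        self.integral += error * dt
        kp = self.kp0 / (1.0 + t / 1000.0)         # assumed gain-scheduling law
        t_on = T_ON_MAX - (kp * error + self.ki * self.integral)
        return min(max(t_on, T_ON_MIN), T_ON_MAX)  # clamp to generator limits

controller = ScheduledPI(kp0=2e-2, ki=2e-3, setpoint=7e-3)   # A2 = 7e-3 mm^2
new_t_on = controller.update(mean_size=7.4e-3, t=900.0)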
Results and discussion

This section is divided into three parts. Subsection 3.1 provides a proof of function of the multivariable control loop by investigating the recorded microdischarge characteristics and the corresponding electrical parameters. Subsequently, subsection 3.2 is devoted to optical emission spectra and their interpretation. Finally, the impact of controlled and uncontrolled microdischarges on the coating surface is discussed in subsection 3.3.

Evaluating microdischarge characteristics and electrical parameters

Figure 4 shows the evolution of the mean microdischarge size and the number of microdischarges during the process under controlled and uncontrolled processing schemes. In the first 750 s, the uncontrolled mean discharge size rises linearly. After that stage, the gradient decreases until a size of A = 8.6 · 10⁻³ mm² is reached. At t = 1550 s the mean discharge size rises sharply and reaches an average value of around A = 15.2 · 10⁻³ mm², while the scatter between measurements increases drastically. This phase can be associated with a high probability of large, high-intensity discharges, which have to be avoided as they may diminish the quality of the oxide layer. At the beginning, the discharge size for the controlled cases rises with nearly the same gradient as in the uncontrolled case; in this phase the duty cycle is not yet changed, because the set point has not been reached. From the moment the mean discharge size reaches the set point (A1 = 5 · 10⁻³ mm² and A2 = 7 · 10⁻³ mm², respectively), the duty cycle is adjusted by the PI-controller. From this point on, the mean discharge sizes are stabilized and the respective curves follow a straight line parallel to the x-axis. This behaviour proves the applicability of the control method for stabilizing the mean discharge size. The scatter in the mean size for the controlled cases increases slightly after 1500 s. Table 1 summarizes statistical quantities of the uncontrolled and controlled measurements. The first timespan (1000-1500 s) is chosen because in this region all measurements have reached their corresponding set point and the regime of large, high-intensity discharges has not yet begun. The second timespan (1500-2000 s) lies within this larger-discharge regime. It is important to divide the time into two regions, as the behaviour in the uncontrolled case differs strongly between the two: the uncontrolled mean size roughly doubles in value, while the relative standard deviation (RSD) increases from 10.1 % to 27.6 %. The controlled-case mean sizes match their respective set points well. Their relative standard deviation also rises, but stays below 13.3 % for all measurements and times. Comparing the averaged mean sizes with their respective set points reveals a maximum control deviation of 1.4 %. The increase in scatter can be explained using figure 4(b), which shows the counted number of discharges per exposure as a function of processing time. In the first 500 s all curves overlap, with only a small uncertainty between measurements. It is expected that the number density decreases over time while the mean discharge size increases [21,34]. In our case, the discharge number starts at zero and increases until a local maximum is reached. This region was not investigated by Yerokhin et al. [21] and Petkovic et al. [34], as their data starts a few minutes after the onset of plasma emission.
The rise in number density at the beginning is explained by the combination of the discharge detection algorithm and the exposure time. To avoid overexposure, the exposure time (t_exp = 20 ms) was adjusted so that the strongest discharges at the end of the process do not saturate the sensor. The microdischarges have a minimal individual photoemission at the beginning of the process and therefore cannot be distinguished from the background noise. This error does not affect the presented method, as the discharge size is smallest at the beginning. After the local maximum, the number density decreases as expected. The curve of the uncontrolled case shows a second local maximum around 1500 s, which may be related to the generator frequency of f = 2.5 kHz; comparable measurements at lower frequencies do not show a second maximum. The reason for this second maximum is still unclear and needs further investigation, as data in the literature is sparse. With the beginning of the controlled phase, the duty cycle is adjusted, which also affects the number density: the number density increases with decreasing duty cycle. As stated before, the generator current is increased at the same rate as the duty cycle is decreased. The number density therefore follows the increase in current. The second maximum can even exceed the first maximum for a set point of A1 = 5 · 10⁻³ mm². In the uncontrolled case the number density drops to single digits, whereas in the controlled cases it is one order of magnitude higher. The largest reduction in duty cycle occurs at the beginning of the controlled phase (compare figure 5), which is why the number density rises quickly in this initial control regime. After that, the changes in duty cycle are smaller and the number density starts to decrease again in the later stages. Monitoring and controlling the output of the electrical generator is important for the robustness of the control setup with regard to errors. Voltage and current at the electrolytic cell are measured with a differential voltage probe and a current probe. The mean voltage during the pulse-on time, plotted over the processing time, is shown in figure 5(a); figure 5(b) displays the duty cycle during the same time frame. The voltage curves can be divided into three phases. Phase one is the non-plasma anodization phase, as the critical breakdown voltage has not yet been reached; this transitional phase is defined by a fast rise in cell voltage. The second phase exhibits a lower but constant cell voltage gradient. Both phases are well documented and in agreement with findings in the literature [35]. The third phase is the onset of the controlled regime, in which the duty cycle is reduced. As the duty cycle is reduced, the voltage starts to deviate from the straight line of the uncontrolled regime. The increase of voltage with decreasing duty cycle is also reported in the literature [35,36] and can be attributed to a higher reignition voltage needed after longer pulse-off times: relaxation processes, such as charge transport, cooling of the oxide material, and gas evolution, become relevant on the timescale of the pulse-off time. As the system is non-linear and time-variant, the duty cycle cannot remain constant and changes over the course of the coating time to re-adjust to changes in the microdischarge properties.
This shows the need for an additional closed control loop for the microdischarge properties, as an open-loop control is susceptible to errors and deviations in the process parameters. The method of bipolar 'soft-sparking' only limits the microdischarge size and photoemission intensity; it cannot control the microdischarge size to a given value and is not suitable for every process (see [24,25]). Comparing both methods, each has its advantages and disadvantages: the bipolar 'soft-sparking' mode is easy to implement, but requires a bipolar pulsed generator and cannot control to a precise value, whereas the unipolar control setup is able to control the microdischarge size, but requires an additional optical setup and software as well as a transparent electrolyte. Voltage and current pulses change in amplitude, length, and shape during the control regime. Current pulses for all three measurements are displayed in figures 6(d)-(f). As amplitude and duty cycle do not change in the uncontrolled case, there is no visible difference in its current pulses over time, except for a small increase in the initial current spike caused by the capacitive-resistive impedance of the electrolytic cell. The charge transport current at the oxide-electrolyte interface dominates at the beginning of the pulse-on time; once the breakdown voltage is reached, the current is dominated by the microdischarge current. The corresponding voltage pulses (see figures 6(a)-(c)) are determined by this impedance and the controlled current pulses. A decrease in duty cycle and the accompanying increase in current over the treated surface area change both the current and the voltage pulses. To compensate for a reduced pulse-on time, the current is increased by the generator, which also increases the current spike in the first microseconds of a pulse. After 1400 s, and for a set point of A1 = 5 · 10⁻³ mm², the current spike reaches a maximum of 5 A, while the voltage simultaneously reaches a maximum of around 1000 V. The power supply must therefore be rated for these short spikes in current and voltage, which exceed the demands of purely galvanostatic operation. The mean microdischarge size is only an aggregate measure of a distribution of individual microdischarges and may not reveal distributional changes between the control regimes over the processing time. Microdischarge photoemission intensities plotted over the respective sizes are shown in figure 7. Each plot is an accumulation of discharges around a certain point in time, including five measurements before and after. This ensures that the sample size for later processing times (t > 1400 s) is large enough to reveal differences in the statistical behaviour. To compare the uncontrolled and controlled regimes (A2 = 7 · 10⁻³ mm²) directly, both measurements are shown in the same plot in different colours; since all three measurements overlap too strongly, only two are presented at a time for clarity. Up to 400 s, the scatter plots of the control regimes show nearly the same behaviour, and differences can be attributed to measurement uncertainties. The controlled case shows a maximum mean discharge size of A2 = 7 · 10⁻³ mm², which fits the set point well. In contrast, the uncontrolled case drifts towards larger microdischarge sizes and higher intensities. The strong decrease in microdischarge number density for long processing times is apparent between the measurements at 1600 s and 1800 s.
Controlling the size also reduces the intensities of larger discharges (A ≥ 10⁻² mm²). Careful optimization of the optical setup is essential for a reliable global microdischarge estimation. The exposure time has to be adapted such that the signal-to-noise ratio is high enough for microdischarge detection while overexposure is avoided; a parameter sweep of the exposure time at a given set point helps to minimize measurement errors. ICCD (intensified charge-coupled device) cameras are not suited for this method: their intensities may be high enough, but the exposure time may be too short for a statistical analysis. In addition, there is a trade-off between optical resolution and microdischarge sample size: microdischarges have to be larger than one pixel, but the overall observed area should be as large as possible. By evaluating scatter plots for the studied process parameters, we ensured that the share of microdischarges at the lower detection limit was always below 3 % of the total number. This does not hold for the first 100 s, where intensities and discharge sizes may lie below the detection limits; however, this does not affect the method of controlling microdischarge sizes, because this early period, with its smaller size distribution, is not of interest here. To visualize the distributions of single microdischarge characteristics, histogram plots give additional insight beyond scatter plots. This is shown in figure 8 for intensity distributions and in figure 9 for size distributions. Measurements are presented for process times of 200 s and 1400 s: at 200 s the controlled and uncontrolled regimes should show the same distributions, while at 1400 s they should deviate strongly, yet still exhibit a sufficiently large sample size. In figures 8(a) and 9(a) the distributions of the controlled and uncontrolled regimes match to a high degree, as both are uncontrolled at this point in time. This indicates a good repeatability of the microdischarge detection, and of the PEO process itself, from a statistical point of view. The falling slope is decisive for the distribution of microdischarge intensities and sizes. Looking at the discharge intensities (see figure 8), the number of discharges with low intensities is much higher in the controlled case (at t = 1400 s) than in the uncontrolled case, and the falling slope is steeper. This leads to a distribution in which lower intensities dominate. The same holds for the microdischarge sizes, although the effect is less pronounced than for the intensities. Hussein et al. classified microdischarges into A-, B-, and C-type discharges [15], where B-type discharges are large, high-intensity discharges. Combining the insights from the scatter plots and the histograms, we conclude that microdischarge sizes and photoemission intensities are highly correlated. Controlling the mean size of the microdischarges is an effective way to reduce high-intensity discharges: B-type discharges may be suppressed considerably, as the size and intensity distributions are kept constant over the whole processing time.

Optical emission spectroscopy

Figure 10 shows optical emission spectra (uncontrolled) recorded with a repetition rate of 1 Hz in a spectral range of 200 nm ≤ λ ≤ 900 nm.
The strongest lines are emitted by aluminum, hydrogen, potassium, hydroxyl radicals, oxygen, and magnesium (the corresponding wavelengths are denoted in figure 10). In the first 1000 s, the changes in the emission spectra between consecutive exposures are relatively small. After this initial stage, drastic changes appear between consecutive measurements: in the later stages, the emission lines may vary by multiple orders of magnitude and the behaviour becomes increasingly stochastic, even at an exposure time of 1000 ms. Measurements for the controlled case are shown in figure 11. Here, fluctuations between consecutive measurements are much smaller and long-term trends are easier to detect. Except for a few emission lines, the trends of the uncontrolled and controlled spectra over time are generally equal (neglecting the differences in variance between measurements). The two main reasons are the increased number density of microdischarges in the later stages of the controlled process and, more importantly, the reduced occurrence of the strongest discharges. Emission lines are a product of bound-bound transitions, i.e. spontaneous emission of excited species. The broadband continuum in figures 10 and 11 may have multiple origins. Different groups proposed that the broadband emission originates from free-bound transitions, e.g. recombination of charged species, or from bremsstrahlung [12,34]; these studies do not elaborate on why this should be the main process. Another contribution may be black-body radiation emitted by the heated substrate metal and metal oxides. Electron temperatures inside microdischarges are assumed to be between T_e,in = 7000-15000 K for microdischarge cores or very intense microdischarges and between T_e,out = 3000-6000 K for outer regions or lower-intensity microdischarges [12,15,37]. Melting and recrystallization are observed in the produced oxide layers, hinting at high substrate temperatures. The temperature- and wavelength-dependent emission B(λ, T) of a black-body radiator is given by Planck's law,

B(λ, T) = (2hc²/λ⁵) · 1/(exp(hc/(λ k_B T)) − 1),

where h is the Planck constant, c the speed of light in vacuum, and k_B the Boltzmann constant. Photons can be absorbed while travelling through the electrolyte; water has a wavelength-dependent absorption coefficient α(λ) (cf. [38]). The reduced intensity I(λ) of the absorbed emission can be estimated with the Beer-Lambert law,

I(λ) = I_0(λ) · exp(−α(λ) x),

where I_0 is the initial emission intensity and x is the length of the absorbing medium. The absorption length from the substrate to the optical window is estimated to be x = 60 mm. The unit of the calculated intensities is W/nm, whereas the spectrometer is calibrated to measure N_phot/(s · nm), with N_phot the number of photons. To compare the calculated black-body emission with the measured data, the calculated emission intensity therefore has to be divided by the photon energy E = hc/λ; a sketch of this calculation is given below. The calculated black-body emission is compared with the experimental data in figure 12 and shows very good agreement with the normalized emission spectra. The maximum intensity of the measured continuum is found at a wavelength of 802 nm. This maximum corresponds to a temperature of T = 3200 K for the combination of Planck's law and water absorption, which is just below the boiling point of aluminum oxide (T_Al2O3 = 3250 K). The absorption coefficient of the electrolyte is simplified to that of distilled water.
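The described conversion can be sketched in a few lines; the absorption data α(λ) has to be supplied separately (e.g. digitized from [38]), so a zero placeholder is used here.

import numpy as np

h, c, kB = 6.62607015e-34, 2.99792458e8, 1.380649e-23   # SI constants

def detected_blackbody_spectrum(lam, T, alpha, x=0.06):
    """Planck emission at temperature T (K), attenuated by the electrolyte
    (Beer-Lambert) and converted to the photon-count units of the spectrometer.

    lam   : wavelengths in m
    alpha : absorption coefficient of water in 1/m (to be digitized, e.g. [38])
    x     : absorption length from substrate to optical window in m (60 mm)
    """
    planck = (2.0 * h * c**2 / lam**5) / np.expm1(h * c / (lam * kB * T))
    attenuated = planck * np.exp(-alpha * x)   # Beer-Lambert law
    return attenuated * lam / (h * c)          # divide by photon energy E = h*c/lam

lam = np.linspace(200e-9, 950e-9, 751)
alpha = np.zeros_like(lam)   # placeholder: without absorption the maximum lies
                             # beyond 950 nm; the water absorption in the near
                             # infrared shifts it towards the measured 802 nm
spectrum = detected_blackbody_spectrum(lam, 3200.0, alpha)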
A more realistic absorption coefficient, taking the addition of potassium hydroxide into account, may therefore lead to increased absorption and even better agreement in the near-infrared region. The measured intensities at wavelengths around 200 nm and beyond 900 nm are overestimated due to the low efficiency of the optical spectrometer and the low signal-to-noise ratio in these spectral ranges. The very good agreement of the calculated black-body radiation with the measured continuum spectra suggests that thermal radiation from fast heating and melting of oxide material is the main source of continuum radiation in our setup. A comparison of the temporal evolution of the controlled and uncontrolled continuum emission intensities at the maximum of 802 nm is shown in figure 12(a). Both cases show the same general behaviour of reaching a maximum and declining afterwards, with the controlled case reaching its maximum slightly earlier. The maximum scatter between two consecutive measurements becomes very large in the uncontrolled case (a factor of 5.3 between I(t = 1966 s) = 2.3 · 10⁷ and I(t = 1967 s) = 1.9 · 10⁸), while the maximum scatter in the controlled case is much smaller (a factor of 1.45, with I(t = 1824 s) = 1.1 · 10⁸ and I(t = 1825 s) = 1.6 · 10⁸). We conclude that the continuum emission is less stochastic in the controlled case and that the predictability between measurements increases. As the scatter decreases, the inherent fluctuation of the continuum production mechanisms may also decrease. This is significant, as it indicates less fluctuation in the intensity or in the actively radiating area on the surface. These findings support the reduction of strong B-type microdischarges proposed in subsection 3.1. As already mentioned, the temporal behaviour of most emission lines is similar between the controlled and uncontrolled regimes. Two noteworthy examples of contrary behaviour are shown in figure 13. In figure 13(a) the emission intensity of the potassium line at 769.9 nm is plotted over the processing time. As the duty cycle is decreased from the standard value of 0.5, the potassium emission starts to rise. The maximum difference (a factor of approx. 5.1) between the controlled and uncontrolled cases occurs at 1510 s; after this point in time, the emission intensities seem to equalize again. The Hα emission of hydrogen behaves differently: the uncontrolled emission changes in a similar way to the potassium emission, but the controlled emission remains stable over the processing time (compare the dashed line in figure 13(b)). Both examples show that it is possible to either change or control the emission of certain elements. This is interesting, as material properties or surface functionalities could potentially be promoted by changing the emission of particular species (e.g. potassium or hydrogen).

Scanning electron microscopy

SEM measurements show the direct impact of the microdischarges on the visible surface region. Differences between the studied control regimes are shown in figure 14. The largest differences are found in the porosity of the treated surfaces, namely the number density of microdischarge holes and the diameters of the melted and solidified areas around these holes, the so-called pancake structures. Ten images from the central region of the substrate are taken into account for the quantification. The number density in the uncontrolled case is approx. n_hole = (198 ± 20) mm⁻², for the controlled case with A2 = 7 · 10⁻³ mm² it is approx.
n_hole = (471 ± 20) mm⁻², and for the controlled case with A1 = 5 · 10⁻³ mm² it is approx. n_hole = (773 ± 20) mm⁻². This corresponds to a maximum increase in number density by a factor of 3.9 between controlled and uncontrolled samples. Furthermore, the difference in pancake radii can be estimated. In the uncontrolled regime, the mean radius is approx. r_pan = (10.31 ± 2) µm; in the controlled case with A2 = 7 · 10⁻³ mm² it is approx. r_pan = (7.32 ± 2) µm, and for the smallest set point A1 = 5 · 10⁻³ mm² the mean radius is approx. r_pan = (5.92 ± 2) µm. Zhuang et al. compared the surface structure of treated substrates under variation of the current density. Their research showed an increase in pancake radius and a decrease in hole number density for higher currents over the treated substrate area [20]. In this study the current density is increased as well, but the pulse-on time is reduced at the same time, which negates the increase in pancake radii and the reduction in hole number density. The radius of the pancake structure is an indirect estimate of the energy that a microdischarge dissipates into the oxide layer: assuming the energy conversion mechanisms stay constant between all cases, a higher dissipated energy implies a higher microdischarge energy, which largely depends on the microdischarge radius and the electron density n_e. Due to a lack of literature data on electron densities as a function of time, frequency, or duty cycle, a change in electron density cannot be excluded. Nevertheless, the change in mean microdischarge area between the different control regimes can explain a large part of the change in mean pancake diameter. The intensities measured in subsection 3.1 should not be used as an estimate of microdischarge energies, as microdischarges tend to ignite in cascades at the same spot [39]; cascading microdischarges, however, should not affect the pancake radius, as it is mostly determined by the individual microdischarge impact. The estimated pancake radii are further evidence that strong B-type discharges can be suppressed. Finally, controlling the unipolar plasma electrolytic process with the presented method may control the microdischarge energy and thus tailor the energy dissipated into the oxide layer.

Conclusion

In this work, we introduced a method to control the mean microdischarge size during unipolar pulsed plasma electrolytic oxidation. The method is based on an in-situ discharge detection process consisting of CCD camera measurements followed by fast image processing algorithms. A PI-controller adjusts the duty cycle and the total current over the treated substrate area to stabilize the mean discharge size. Distributions of microdischarge statistics, e.g. mean size, number of discharges, and discharge photoemission intensity, have been studied, and the applicability and control error have been estimated. Differences in the photoemission spectra between the controlled and uncontrolled regimes were studied, and possible continuum production mechanisms were discussed. Lastly, the impact of the microdischarges on the surface of the produced aluminum oxide coatings was studied. The most important findings are summarized in the following:

• The mean microdischarge size can be controlled to a given set point with a maximum control deviation of 1.4 %.
The relative standard deviation of the mean microdischarge size after 35 min of processing can be reduced from 27.6 % to 9.40 %. Large, high-intensity discharges can be effectively suppressed with a reduced mean microdischarge size through the elimination of strong B-type discharges.

• The number of microdischarges increases when the duty cycle is decreased, mainly because of the increase in current density. Not only the mean microdischarge size can be controlled, but also the size and intensity distributions over time.

• The short-term voltage, current, and power demand during the pulse-on time increase compared to classically galvanostatic operated PEO.

• The intensity of certain emission lines can be promoted (potassium) or kept constant (hydrogen) compared to classical galvanostatic or potentiostatic PEO. This may open the possibility for new surface processing techniques, such as multi-step approaches: in a first step, a desired coating could be produced using any kind of process parameters; in a second step, the microdischarge control technique could produce a functional surface coating, where the control opens the possibility to change the plasma excitation dynamics (compare subsection 3.2) or the coating surface properties (compare subsection 3.3).

• The calculated black-body emission intensities are in very good agreement with the measured continuum spectra (T = 3200 K). It is concluded that in our setup the main origin of the continuum emission is black-body radiation. The variance in the continuum emission intensity can be reduced, which may indicate less variance in the local surface oxide temperatures or in the actively radiating surface area.

• The number of microdischarge-induced pancake structures can be increased while the mean radius of the melted and solidified areas (pancake radius) decreases. Together with the results from the camera measurements and the optical emission spectroscopy, it is very likely that microdischarge energies can be controlled with this method.

Further studies should be devoted to high-frequency pulsing of the plasma electrolytic process and the interaction of electrical parameters and microdischarges in this frequency range. In addition, this work was mostly devoted to microdischarge characteristics and their control; further investigations should focus on the characterization of the surfaces and material compositions created with the presented closed-loop control of unipolar pulsed plasma electrolytic oxidation.
Automated Extraction of Labels from Large-Scale Historical Maps

Historical maps are frequently neither readable, searchable, nor analyzable by machines due to lacking databases or ancillary information about their content. Identifying and annotating map labels is seen as a first step towards an automated legibility of such maps. This article investigates a universal and transferable methodology for the work with large-scale historical maps and their comparability to others while reducing manual intervention to a minimum. We present an end-to-end approach which increases the number of true positive identified labels by combining available text detection, recognition, and similarity measuring tools with our own enhancements. The comparison of recognized historical with current street names produces a satisfactory accordance which can be used to assign their point-like representatives within a final rough georeferencing. The demonstrated workflow facilitates a spatial orientation within large-scale historical maps by enabling the establishment of corresponding databases. Assigning the identified labels to the geometries of related map features may contribute to machine-readable and analyzable historical maps.

Introduction

Automatically extracting labels from historical maps is not as straightforward as it is for current maps (Chiang, 2017; Lin and Chiang, 2018). A frequent lack of in-depth information, which is generally provided by databases within current maps, impairs a simple search or analysis of places, street or building names, and other local designations within historical maps. As a large part of existing attempts are restricted to, e.g., a particular cartographic style and therefore not transferable to others, detecting text in these scanned and often complex maps remains an ongoing challenge. The purpose of this study is to demonstrate a universal solution for an automated detection and recognition of text elements from large-scale (≥1:10,000, Kohlstock (2004)) historical maps without the need for major adjustments to individual maps. With this goal in mind, we have been able to locate and label geographical features which, in general, are not accessible from historical maps. In addition, a contribution to an approximate georeferencing of historical maps has been made. We present an automated workflow for detecting and recognizing labels from historical maps and comparing them with current street names. This matching enables a spatial referencing of further streets and places so that an initial spatial orientation within a historical map becomes possible. The gained information may be useful for subsequent database production or for comparisons between different maps, e.g. from various periods. This paper is structured as follows. In Sect. 2, an overview of current challenges and related work concerning text extraction from historical maps is presented. Section 3 gives details on the data and methodology used, before experimental results of the individual stages (detection, recognition, and comparison of map labels) of our end-to-end approach are reported in Sect. 4. Finally, Sect. 5 concludes the paper by discussing further potential enhancements and future work. The differentiation between text and other map elements such as geometries of buildings or roads is considered a major challenge. For monochrome historical maps, this differentiation cannot be based solely on color information (Iosifescu et al., 2016).
The great stylistic variety among historical maps and their individual typefaces demands further differentiators when applying automated approaches such as machine learning. Recurring patterns and shapes, which are utilized e.g. in automated face detection or in the identification of roads for autonomous driving, can rarely be found within old maps and their labelling. Other, technically induced drawbacks turning the described issue into a complex task are a low graphical quality or perspective distortions, caused by scanning processes, for instance. The mentioned aspects often cause unsatisfactory results when applying (semi-)automated text detection and recognition to historical maps. Manual post-processing becomes necessary as soon as parts of map labels have not been identified or a context to similar words is missing (Chiang et al., 2020). Both automated and semi-automated processes aiming at the identification of text in historical maps imply a series of advantages and drawbacks. With semi-automated methods a higher recognition rate for a greater variety of map content can be achieved, whereas, at the same time, laborious manual processing is essential. Hence, only a small quantity of maps can be processed by such time-consuming approaches. Previous research also showed that limitations to highly specific map types or typefaces (e.g. straight, horizontally aligned labels or uniform text sizes) do exist. A number of authors have suggested the utilization of a Hough transform to extract text from images or maps, but have not considered curved labels (Fletcher and Kasturi, 1988; Velázquez and Levachkine, 2004) or even alphabetic characters (Chen and Wang, 1997). Methods employed by Goto and Aso (1999) and Pouderoux et al. (2007), which identify text in maps based on the geometry of individual connected components, do not consider characters of various sizes. Cao and Tan (2002) made use of individual thresholds to detach the black map layer consisting of text and contours, and of connected components to differentiate between the two. Although this is considered a much faster approach compared to a Hough transform, their tailor-made size filters cannot handle overlaps between text and other map features apart from specific line types (Tombre et al., 2002). An increasing number of studies are based on the early involvement of a gazetteer or a comparable database from other sources to match place names with those extracted from small-scale maps (Milleville et al., 2020; Simon et al., 2014; Weinman, 2013). However, this so-called geoparsing only works with a comprehensive gazetteer and for place names which do not shift over time; such gazetteers rarely exist for historical large-scale maps. To properly address the mentioned issues, Laumer et al. (2020) assigned each pixel either to a map's foreground (resp. labels) or background with the help of convolutional neural networks. Within their approach, labels, or rather clusters built up from interrelated characters, were interpreted, manually matched, and corrected by the combined use of Google's Vision API and a local gazetteer. Machine learning approaches may enable a universal solution to automatically detect and extract text from a variety of maps. Although their application requires a large amount of input training data, it offers the advantage of processing data without any manual intervention (Chiang et al., 2020).
With Strabo, Li et al. (2019) provide a command line tool for detecting text within maps which is not only based on color differences but also on other characteristics such as the similarity of text sizes or distance measures between individual characters. Its application may be promising when examining monochrome maps. Until now, machine learning has not been widely used to analyze historical maps. Instead, binarized connected components or other bottom-up approaches have been applied to maps to detect labels (Weinman et al., 2019). So far lacking in the scientific literature, this paper addresses an appropriate combination of automated text detection and recognition from historical large-scale maps with the aim of extracting machine-readable information.

Data

For demonstrating our suggested approach with an illustrative example, we chose a large-scale historical map of the city of Hamburg from 1841 (exemplary extract in Fig. 1). Map features such as buildings, built-up areas, roads, railroads, stations, drainage, and docks are illustrated (Hamburg, Germany 1853). Due to the map's salient color composition and texture, the human perception of map objects and their differentiation is facilitated (Schlegel, 2019). The dark labels, primarily designating streets, squares, and water bodies, are clearly visible on the bright background but frequently connected to or even overlapping textured objects. According to general recommendations, a high resolution (≥300 dpi) of the scanned input map is ideal so that characters are large enough to be readable by automatic text recognition tools (Milleville et al., 2020). With regard to a reduction of computational cost and time, an appropriate map subset illustrating as many differing map features as possible was chosen for the further procedure. The input image, as seen in Fig. 1, was stored in lossless PNG format.

Text detection

With the objectives of

• reducing manual user interaction within the entire workflow and
• increasing the number of true positive labels for a subsequent text recognition,

a separation of the map's text from non-text elements was performed using the automatic machine learning approach Strabo (Li et al., 2019; Weinman et al., 2019). Being based on OpenCV's EAST text detector, Strabo is able to detect cartographic labels of different typefaces, sizes, orientations, and curvatures, and even those overlapping with other map elements (Tombre et al., 2002). Also, blurred, reflective, or partially obscured input images can be processed up to a certain point (Rosebrock, 2018a). The open source tool implements functions of available Python libraries (e.g. NumPy, OpenCV, SciPy, TensorFlow, Matplotlib) for vector and image processing, statistical computation, machine learning, and visualization. It separates a text layer from the rest of an input image based on differences in color, text size ratios, and appropriate text samples. As an output, Strabo supplies a vector dataset including rectangular bounding boxes, each holding a (raster) input image area where text was detected (see upper third of Fig. A1 in Appendix).

Additional adjustments

As is the case with many applications, Strabo regularly detects only parts of map labels or even omits them entirely, so that further manual post-processing of these results becomes necessary (Chiang et al., 2020). While avoiding an individual editing for each map (whether via pre- or post-processing), we focus on a universal solution to this issue.
Regardless of a map's apparent condition, year of creation, style, or color composition, a transferability to other, similar large-scale maps is desirable. When working with Strabo we identified the following points which might prevent an adequate detection of labels:

• Specific label orientations due to the lack of corresponding training data (Chiang, 2019). As suggested by Tesseract's user documentation (see also Sect. 3.4), we addressed this issue by repeatedly rotating the input image (Tessdoc, 2020). Thus, having five input images in total (rotated through 0°, +45°, +90°, -45°, and -90°, respectively), the share of true positives of all existing labels throughout the map, called recall, could be increased by about 50% (see (b) in Fig. 3). As can be seen in Tab. 1, this also applies to the other maps examined.

• Overlapping map elements such as textures, lines, or other labels (see examples in Fig. 2). This is assumed to be a main drawback in the course of text detection (Abdullah et al., 2015; Tofani and Kasturi, 1998). A vast amount of existing algorithms operate on the assumption that black text contrasts with differently colored features. However, with a fluent transition between labels and other map elements of the same color, their differentiation is scarcely possible within typically black and white historical maps. Due to their occasionally recurring patterns, textures are often mistakenly identified as text by automated detection processes. Tofani and Kasturi (1998), Cao and Tan (2002), as well as Nazari et al. (2016) defined different thresholds based on connected components to distinguish between text and other map elements. This laborious task is certainly not adaptable to a large variety of maps.

The following further drawbacks do not or only rarely appear within the presented map but may be a general challenge for text detection:

• Wide character spacing. Cartographic labeling principles indicate a smaller spacing between characters compared to words (Yu et al., 2017). According to Strabo's specification, the horizontal space between two characters must be smaller than the largest character so that they are connected to one word. This is not the case for e.g. 'Alter Wall' within the upper left part of our map subset illustrated in Fig. 1.

• Extraordinarily curved labels. Strabo splits labels deviating substantially from a straight alignment into smaller parts in favor of an enhanced recognition of individual characters.

• Differing text sizes within a label.

• Low graphical quality (Abdullah et al., 2015; Yu et al., 2017; Chiang et al., 2016). Efforts to emphasize and make use of the map's whole RGB color range by linear contrast stretching (normalization) and global histogram equalization yielded only marginal improvements of the overall label detection rate (see (c) in Fig. 3 as well as Tab. 1).

The accuracy measures in Tab. 1 follow Pouderoux et al. (2007); the f-score is defined as f = 2 · (precision · recall)/(precision + recall), with 100% indicating perfect recall and precision. Strabo frequently generates multiple bounding boxes for an individual label which actually represent one identical label; consequently, those bounding boxes belonging to one label overlap each other. Figure 4 illustrates how this spatial relation can be used for merging the affected bounding boxes with the aim of effectively separating each label from the input image thereafter. In view of the aforementioned causes, overlapping bounding boxes meeting at least one of the following criteria were unified, in the order listed, within an iterative procedure:
1. The overlapping area between two bounding boxes is larger than 50% of the smaller bounding box's area (Fig. 5 (1)).

2. The distance between the centroids of two overlapping bounding boxes is larger than 1.5 times the overall average bounding box height and, at the same time, the difference between their rotation angles is less than 8 degrees (Fig. 5 (2)).

To achieve the desired results, the input data was converted into a local, metric coordinate reference system before calculating each bounding box's surface area. For criterion (1), the ratio of the overlapping area between two bounding boxes to the area of the smaller one was determined. The two considered polygons were unified into a single one for ratios of at least 50%. Preliminary testing showed that an overlap of 50% or more indicates an incorrect double detection by the algorithm and therefore an identical label. This procedure was iterated until all ratios between two bounding boxes were less than 50%. Using further Python libraries such as GeoPandas, we derived the coordinates of each bounding box's centroid. NumPy's mean() function was used to determine the average of the two shortest side lengths over all bounding boxes, which was taken as their initial average height. In combination with their inclination, provided by Strabo and normalized to a semicircle covering 0 to 180 degrees, these two variables could be used to find cases exceeding or falling below the empirically defined thresholds of criterion (2). Again, two bounding boxes were unified as long as they fulfilled the mentioned conditions.

Figure 5: Criteria contributing to a unification of two bounding boxes: overlapping area >50% of the smaller bounding box's area (1); centroid distance >1.5 times the overall average bounding box height and rotation angle difference <8° (2).

As a result, each label represented on the input raster map and localized by Strabo comprised exactly one appropriate bounding box. By applying ArcPy, Esri's Python library for spatial data processing, the original input image could be clipped by each of these bounding boxes to generate individual text image areas. Exported as individual raster files, they were rotated through the averaged rotation angle calculated on the basis of their original bounding boxes. This procedure was implemented to considerably improve the preconditions for the subsequent text recognition.

Text recognition

Available text recognition approaches rarely achieve satisfactory results on raster maps, so that additional steps become necessary (Milleville et al., 2020). With the help of a preliminary text detection, early knowledge about the exact location of text can contribute to systematically reading content from input images such as historical maps. Combining text detection and recognition in an end-to-end approach not only improves recognition rates but also reduces computing time by focusing solely on text image areas (Ye and Doermann, 2015; Weinman et al., 2019). To convert the detected text image regions into a machine-readable format, resp. characters and strings, we used the free and open source engine Tesseract OCR, version 4.1.1 (Weil et al., 2020), which is considered one of the most accurate tools for optical character recognition (OCR) at present (van Strien, 2020). As all labels within the utilized map subset are in German, this language specification was defined for an improved automatic text recognition by Tesseract. Additionally, each input image was treated as a single word. A hedged sketch of this recognition step is given below.
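Note that the paper invokes the tesseract binary from the command line; the pytesseract wrapper and the file name used here are assumptions for brevity.

from PIL import Image
import pytesseract

def recognize_label(path: str) -> str:
    """Read one deskewed text image area as a single German word."""
    img = Image.open(path)
    # lang='deu' selects the German model, --psm 8 treats the whole
    # image as a single word
    return pytesseract.image_to_string(img, lang='deu', config='--psm 8').strip()

print(recognize_label('text_area_042.png'))   # e.g. 'Fisch'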
The workflow shown in Fig. A2 (see Appendix) starts with an exemplary output string from Tesseract, 'Fisch', which is used for further processing.

String similarity

Given a character string for each detected text image area, our aim was to roughly assign them spatially to the input map (Fig. 1). To strengthen the recognition confidence by retaining the one text string turned right way up, Chiang and Knoblock (2013) suggest a juxtaposition of recognized and suspicious characters. However, neither this methodology nor a comparison of similar strings between different maps considers an appropriate ground truth. In practice, OCR results are rarely precisely identical with a potential ground truth. To attain the real names of streets and places, further reference values are necessary. A great variety of existing approaches (e.g. Simon et al. (2014); Weinman et al. (2019)) are based on the comparison with a regional gazetteer. This is, in some cases, available (and therefore efficient) only for small-scale maps. We take one step further by comparing all recognized strings to an available database holding the names of current streets and places within the region covered by our map example in Fig. 1, the city center of Hamburg. As certain street names designate e.g. historical events or circumstances and are therefore subject to only minor changes over long periods, this local geodataset could be used as a comparable similarity measure (Hanke, 2014). To effectively measure the similarity between two strings, namely an OCR output string_h indicating a historical street or place on the one hand and a list of current street names (string_c) on the other hand, their Levenshtein distance was determined. We were able to identify street names which are likely to be identical in historical and recent maps by applying two different measures implemented in the Python library fuzzywuzzy, which is based on the Levenshtein algorithm:

• ratio measures the similarity based on the number of character edits (adding, erasing, and replacing) needed to transform string_h into string_c (Yu et al., 2017), and
• partial ratio computes the similarity of the shorter string_h to substrings of the longer string_c.

Here, both measures appeared to be of equal value, as individual characters might be recognized incorrectly (→ ratio) and only parts of strings might be identified (→ partial ratio). The output values are percentages ranging from 0 (no similarity) to 100 (identical). Figure A2 (see Appendix) gives several output examples including their percentage value of accordance for the input string_h 'Fisch'. A low score can be an indication of either a poor OCR outcome or a great difference between the historical and current street names string_h and string_c. Additional rules based on our own findings were defined to exclude from further processing each string_h having only low (<75%) or multiple identical matching values for string_c. The street names matching a string_h above the defined threshold of 75% were used as control points for the subsequent georeferencing; a sketch of this matching step is given below.
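The matching step can be condensed into a small helper; combining ratio and partial_ratio via max() as well as the function itself are our reading of the rules described above.

from fuzzywuzzy import fuzz

def best_match(string_h, current_names, threshold=75):
    """Return the current street name matching the OCR output, or None."""
    scores = [max(fuzz.ratio(string_h, c), fuzz.partial_ratio(string_h, c))
              for c in current_names]
    best = max(scores)
    if best < threshold or scores.count(best) > 1:
        return None   # poor OCR result or ambiguous (multiple identical) match
    return current_names[scores.index(best)]

print(best_match('Adolphsbrüke', ['Adolphsbrücke', 'Hopfenmarkt']))  # Adolphsbrücke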
Approximate georeferencing

The dataset used for allocating current street names helped to perform an initial rough georeferencing of the historical map subset. Since each street within this geodataset consists of a variable number of linestrings, we defined different rules to find the centroid representing each street's approximate point-like location: for a street consisting of only one linestring, the point at half-length was taken as the street's centroid; for streets comprising two linestrings, the interpolated point at half-length over both lines was specified as the corresponding centroid; for each street represented by more than two lines, we built the centroid of their common rectangular bounding box. As the bottom section of Fig. A2 (see Appendix) illustrates, these labelled points served as control points for a georeferencing via affine transformation; a sketch of the centroid rules is given below.
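These rules translate almost directly into code. The following sketch uses shapely and assumes each street is available as one MultiLineString; the actual structure of the geodataset is not specified here.

from shapely.geometry import MultiLineString

def street_centroid(street: MultiLineString):
    parts = list(street.geoms)
    if len(parts) <= 2:
        # one linestring: point at half-length; two linestrings: interpolated
        # point at half of the combined length
        return street.interpolate(0.5, normalized=True)
    # more than two linestrings: centroid of the common rectangular bounding box
    return street.envelope.centroid

street = MultiLineString([[(0, 0), (2, 0)], [(2, 0), (2, 2)]])
print(street_centroid(street))   # POINT (2 0)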
Text recognition Utilizing the derived and unified bounding boxes, the occurrence of text elements within the map could be precisely located. This enabled an improved reading of labels from the input map, the text recognition. As can be seen from Fig. A1 in the Appendix, our workflow includes an extraction of all text image areas before bringing those into a horizontal orientation. Our experiences revealed that Tesseract is incapable of reading text rotated by 10 degrees or more. Recognizing rotated text is an ongoing, still unsolved challenge in OCR (Ye and Doermann, 2015; Yu et al., 2017). However, map labels within the bounding boxes might be oriented in two directions: firstly, right side up in a readable form and secondly, upside down, rotated by 180 degrees. The cropped text image areas were consequently rotated through the rotation angle of their associated bounding boxes on the one hand and an additional 180 degrees on the other hand. Comparable recognition results were also obtained for the further tested input maps. Although Tesseract generally assumes a clean, plain input image and its model is trained on specific typefaces, interfering artifacts such as parts of lines, textures, and other map elements did not considerably deteriorate the outcomes (Rosebrock, 2018b). Matching to current data Several concurring names could be identified between historical and current streets and places. After applying fuzzywuzzy's ratio and partial ratio, Tesseract's output on the one hand and the local geodataset 3 including current street names on the other hand could be matched in a satisfactory manner for our map example (Fig. 1). As seen in Tab. 2, the similarity scores of matching strings such as Adolphsbrücke, Hopfenmarkt, Schopenstehl, or Speersort exceeded our defined threshold of 75%. We continued to use those labels having high matching rates and a good distribution over the raster map. In combination with their previously derived centroids (see Sect. 3.3), they served as reference points for a subsequent allocation of all remaining streets as well as for an initial rough georeferencing of the historical map. By assigning street labels to specific locations within the map, their meaning and context (semantics) could be specified (see Fig. 6). Conclusions and outlook This study can be understood as a proof of concept for an automated end-to-end workflow to extract labels from large-scale historical maps. Our findings that detection and recognition rates are generally low (<80% and <60% on average, respectively) are broadly consistent with Weinman et al. (2019) and point out necessary improvements for machine learning approaches (Ye and Doermann, 2015). By combining tools addressing text detection, recognition, and string similarity with further adjustments, we were able not only to increase the overall recognition rate but also to provide a base for useful ancillary information such as the names of streets and places. This may be considered a promising step towards searchable and analyzable historical maps. Furthermore, an approximate georeferencing, which is frequently lacking for historical maps, could be performed. For best results, those labels having the highest similarity rates and an appropriate scattering over the map should be considered as reference points. A great benefit may be the resulting facilitated comparison between different maps, such as between historical and current ones. We demonstrated the possibility of transferring the suggested approach to a variety of maps because it omits map-specific individual adjustments. Nevertheless, disturbing factors such as interfering artifacts from building corners, textures, or map grids may occur and can therefore still be challenging for different maps.
Further testing with additional maps might help to specify and minimize the sources of disturbance more precisely. To improve the overall accuracy of the presented approach, we suggest connecting identified single words into complete map labels. This may be achieved by looking closely at the adjacency and the similarity of rotation angles of detected text image areas. Also, map labels spanning multiple lines should be considered. The certainty of true positives may thereby be increased for all substeps within our comprehensive approach. Future research might build on our results to label further map features and to assign those to their related geometries. The identification of geometries such as streets, buildings, or waterbodies may be facilitated by a preceding elimination of all detected labels within a map. Segmenting and classifying map objects based on their different properties could support the establishment of ancillary, informative databases and therefore enable the analyzability of historical maps. With this kind of feature matching, not only might further map objects be identified but a more intuitive comparison between historical and current maps would also become possible. Data and software availability All research data and applications produced and applied within this publication can be found at https://doi.org/10.5281/zenodo.4721174 (Schlegel, 2021). The repository is structured following Sect. 3 of this paper. The results were generated using QGIS Desktop 3.16.0 (approximate georeferencing, Sect. 3.6), the command prompt in Windows 10 OS (Tesseract OCR, Sect. 3.4), the Linux (Ubuntu 18.04) command line via Windows Subsystem for Linux (Strabo, Sect. 3.2), as well as several Jupyter Notebooks (additional adjustments, Sect. 3.3, and string similarity, Sect. 3.5) written in Python. These scripts are available under the GNU GPLv3 license. The workflow underlying this paper was partially reproduced by an independent reviewer during the AGILE reproducibility review and a reproducibility report was published at https://doi.org/10.17605/osf.io/anv9r. Figure A2: Workflow for matching historical to similar current street names with the aim of performing a rough georeferencing.
LinkImputeR: user-guided genotype calling and imputation for non-model organisms Background Genomic studies such as genome-wide association and genomic selection require genome-wide genotype data. All existing technologies used to create these data result in missing genotypes, which are often then inferred using genotype imputation software. However, existing imputation methods most often make use only of genotypes that are successfully inferred after having passed a certain read depth threshold. Because of this, any read information for genotypes that did not pass the threshold, and were thus set to missing, is ignored. Most genomic studies also choose read depth thresholds and quality filters without investigating their effects on the size and quality of the resulting genotype data. Moreover, almost all genotype imputation methods require ordered markers and are therefore of limited utility in non-model organisms. Results Here we introduce LinkImputeR, a software program that exploits the read count information that is normally ignored, and makes use of all available DNA sequence information for the purposes of genotype calling and imputation. It is specifically designed for non-model organisms since it requires neither ordered markers nor a reference panel of genotypes. Using next-generation DNA sequence (NGS) data from apple, cannabis and grape, we quantify the effect of varying read count and missingness thresholds on the quantity and quality of genotypes generated from LinkImputeR. We demonstrate that LinkImputeR can increase the number of genotype calls by more than an order of magnitude, can improve genotyping accuracy by several percent and can thus improve the power of downstream analyses. Moreover, we show that the effects of quality and read depth filters can differ substantially between data sets and should therefore be investigated on a per-study basis. Conclusions By exploiting DNA sequence data that is normally ignored during genotype calling and imputation, LinkImputeR can significantly improve both the quantity and quality of genotype data generated from NGS technologies. It enables the user to quickly and easily examine the effects of varying thresholds and filters on the number and quality of the resulting genotype calls. In this manner, users can decide on thresholds that are most suitable for their purposes. We show that LinkImputeR can significantly augment the value and utility of NGS data sets, especially in non-model organisms with poor genomic resources. Electronic supplementary material The online version of this article (doi:10.1186/s12864-017-3873-5) contains supplementary material, which is available to authorized users. Next-generation sequencing (NGS) based genotyping methods allow thousands of genetic markers to be discovered and genotyped across a large number of samples in a single step (e.g. [8]). However, these methods also result in significant amounts of missing genotype data when compared to previous technologies like SNP arrays [9]. Nearly all studies that make use of genome-wide genotype data first fill in the missing genotypes using genotype imputation [10]. By inferring missing genotypes, not only does imputation result in a more complete table of genotype data, but it can also improve the power of downstream analyses, such as Genome-Wide Association Studies (GWAS) [11]. Most existing genotype imputation methods, including MaCH [12], fastPhase [13], IMPUTE2 [14] and our existing method, LinkImpute [15], use patterns from known genotypes to impute missing genotypes.
These known genotypes are usually inferred prior to imputation using separate genotype calling software such as GATK [16], SAMtools [17,18] or TASSEL [19]. These pipelines only infer a genotype when, due to the quantity and quality of the sequence reads, there is sufficient confidence in the inferred genotype (e.g. [8]). In cases where confidence in the genotype call is not sufficient, a genotype is not inferred, and the genotype is set to missing. Thus, although genotypes set to missing may have supporting sequence reads that provide some information about the correct genotype, this information is ignored and excluded from downstream analyses, including imputation. It has been demonstrated that the use of sequence reads can improve imputation accuracy, and the exploitation of this information has been incorporated into several imputation packages including Beagle [20], findhap [21] and STITCH [22]. However, all of these software packages require markers to be ordered and are thus restricted to organisms with high-quality reference genomes. Here we introduce LinkImputeR, a novel imputation method that exploits sequence read information to perform both genotype calling and imputation. Like its predecessor, LinkImpute [15], it is designed for non-model organisms since it requires neither ordered markers nor a reference panel of known genotypes. Most importantly, LinkImputeR enables the user to investigate the effects of missingness and read depth thresholds on the size and accuracy of the resulting genotype table. We provide several metrics supporting the quality and speed of our algorithm using genome-wide SNP data from apple, cannabis and grape. Implementation In order to incorporate read count information into imputation, LinkImputeR first infers genotypes from read counts using a simple likelihood calculation. It then uses the LD-kNNi algorithm [15] to impute the genotypes that fall below a chosen read count threshold. Finally, LinkImputeR combines information from the likelihood calculation and imputation result to produce a called genotype. LinkImputeR optimizes the parameters used in each of these steps to maximize accuracy. Each of these steps is described in more detail below. Each step produces a probability for each of the three possible genotypes at a bi-allelic marker in a diploid organism, which we refer to as the "inferred probabilities", the "imputed probabilities" and the "called probabilities", respectively. We refer to the genotype with the greatest probability in each case as the "inferred genotype", the "imputed genotype" and the "called genotype", respectively. In this work we only consider biallelic markers, although the methods introduced here could be generalized to work with multiallelic SNPs. Whenever we refer to linkage disequilibrium (LD) we are referring to LD calculated using a simple r² correlation. Inferring genotypes We use the calculation from TASSEL 5 [19] to infer genotypes from read counts. For each genotype, g ∈ {0, 1, 2}, we calculate the likelihood, L_g = f(r_R; r_R + r_A, p_g), of seeing the observed read counts if that is the true genotype, where r_R is the number of reference reads, r_A is the number of alternative reads, p_g is the probability of a single read showing the reference allele under genotype g, and e is the error rate entering p_g. f(k; n, p) is the probability mass function of the binomial distribution. For this study we set the error rate, e, to 0.01, the same as TASSEL 5. From the likelihoods we calculate the probability of each genotype as p^n_g = L_g / (L_0 + L_1 + L_2).
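The displays for L_g and p^n_g were lost in extraction; the sketch below is our reconstruction of this standard TASSEL-style calculation, under the assumption that g counts copies of the reference allele (so p_0 = e, p_1 = 0.5, p_2 = 1 − e).

```python
from scipy.stats import binom

def inferred_probabilities(r_ref, r_alt, e=0.01):
    """Inferred genotype probabilities p^n_g from read counts.

    Assumption: g in {0, 1, 2} counts copies of the reference allele,
    so a single read shows the reference allele with probability
    e, 0.5 or 1 - e under the three genotypes.
    """
    n = r_ref + r_alt
    p_read_ref = {0: e, 1: 0.5, 2: 1.0 - e}
    likelihood = {g: binom.pmf(r_ref, n, p_read_ref[g]) for g in (0, 1, 2)}
    total = sum(likelihood.values())
    return {g: likelihood[g] / total for g in (0, 1, 2)}

# e.g. 5 reference reads and 1 alternative read:
print(inferred_probabilities(5, 1))
```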
Imputing genotypes In our previous paper [15] we introduced LD-kNN Imputation. Here we modify this algorithm to produce a probability for each genotype, rather than only the most likely genotype. We infer genotypes for those with a read depth greater than a threshold, d, and then use these to impute the remaining genotypes. To impute a genotype at SNP a in sample b, LD-kNNi first uses the l SNPs most in LD with the SNP to be imputed in order to calculate a distance from sample b to every other sample for SNP a (see [15] for full details of this step). The algorithm proceeds by picking the k nearest neighbours to b that have an inferred genotype at SNP a and then scoring each of the possible genotypes as a weighted count of these genotypes, c_g = Σ_{s∈N} I(h(s, a) = g)/d_l(b, s), where N is the set of k nearest neighbours and d_l(b, s) is the distance between the sample b and a nearest neighbour s. h(s, a) is the known genotype at SNP a in sample s and I(h(s, a) = g) is an indicator function that takes the value 1 if h(s, a) = g and 0 otherwise. From the score of each genotype, we calculate the imputation probability as p^m_g = c_g / (c_0 + c_1 + c_2). As in LinkImpute, LinkImputeR optimizes the values of k and l so as to obtain the greatest accuracy. Details on accuracy estimation are below. Calling genotypes We make final genotype calls by combining the inferred and imputed genotype probabilities. We calculate the called probability of genotype g as p^c_g = w·p^n_g + (1 − w)·p^m_g, where w is a weighting factor controlling how heavily the inferred and imputed probabilities are weighted. The optimal w will depend on the sample: for example, if the data were collected from a large number of closely related samples, genotype accuracy may be higher if the imputation probabilities are weighted more heavily, since the imputation is likely of higher quality. LinkImputeR optimizes the value of w by testing values between 0 and 1 in increments of 0.01 in order to obtain the greatest accuracy. When optimizing the value of w, the set of masked SNPs employed is different from that used to optimize the values of k and l used in the imputation step. Investigation of the effect of w showed that its effect on accuracy was not unimodal, and as such more efficient search algorithms may not find the true optimum (data not shown). We only impute SNPs with fewer reads than the threshold, d; combining inferred and imputed probabilities therefore has no effect for genotypes with more reads than the threshold.
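A minimal sketch of the scoring and calling steps: the 1/distance weighting and the linear mixture in w are our reconstruction of the displays lost above, and all names are illustrative.

```python
def imputed_probabilities(neighbours):
    """Imputed probabilities p^m_g from the k nearest neighbours.

    neighbours: list of (distance, genotype) pairs for samples with an
    inferred genotype at the SNP being imputed (distances assumed > 0);
    closer neighbours contribute more to the weighted count c_g.
    """
    c = {0: 0.0, 1: 0.0, 2: 0.0}
    for distance, genotype in neighbours:
        c[genotype] += 1.0 / distance
    total = sum(c.values())
    return {g: c[g] / total for g in c}

def called_probabilities(p_inferred, p_imputed, w):
    """Called probabilities p^c_g: weight w on inferred, 1 - w on imputed."""
    return {g: w * p_inferred[g] + (1.0 - w) * p_imputed[g] for g in (0, 1, 2)}

def called_genotype(p_called):
    """The called genotype is the one with the greatest probability."""
    return max(p_called, key=p_called.get)
```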
Accuracy estimation To estimate accuracy we mask read counts from 'known' genotypes (10 000 for apple and cannabis; 5 000 for grape) at random from across the dataset without replacement. We consider a genotype to be known if it has a read depth ≥ 30, in which case its known genotype is also the inferred genotype using the above methodology. Accuracy is then defined as the proportion of masked genotypes where the 'known' and called genotypes are the same. To ensure that the read depth distribution of the genotypes we mask reflects the read depth distribution in the data set, we perform the following sampling procedure. First, we calculate the distribution of read depths for genotypes with a read depth ≤ d_a. This depth threshold, d_a, is different from the depth used elsewhere in this study, d, to allow a fair comparison between different values of d. For example, if we compare results from d = 2 to d = 8, we need to compare our accuracy for genotypes with read depths up to and including 8. From the distribution, we draw a depth at random. We uniformly sample reads to be removed at random until this depth is achieved for the masked genotype. We repeat this process for each masked genotype, ensuring that the read depth distribution of the genotypes used in our accuracy calculation will be the same as in the data set as a whole. We then mask and impute each of the chosen genotypes individually, keeping all the other chosen genotypes unmasked. For simplicity, when calling genotypes, we assume that genotypes with a read depth > d_a are inferred correctly when calculating accuracy. For this study we set d_a to 8 as this is the maximum value of d we test. We reason that, at read depths greater than this threshold, the inferred genotype is always more likely to be the correct genotype when different from the imputed genotype. However, it may be that the inferred genotype is incorrect, so we use a much higher threshold (30 in this case) when choosing genotypes to mask. The accuracies reported by LinkImputeR are calculated using a test set of SNPs different from the training sets used to optimise k/l and w; since the datasets being called are different for every case, different test and training sets are used in each case. Also, although we report accuracy here, we also calculate the correlation between imputed and actual genotypes, where both are centred to alleviate the effects of MAF [23]. LinkImputeR reports both the accuracy and correlation regardless of which is used for optimization. Data Here we analyze apple [24] and grape [25] GBS data from our previous study [15] and also include GBS data from cannabis [26]. We use the TASSEL 5 pipeline [19] to generate SNPs from all three datasets since TASSEL 5 infers genotypes using the same method as we do in this study. We use default TASSEL 5 parameters throughout and use bwa [27] as the aligner, with the parameters recommended in the TASSEL documentation. The reference genomes used were the Malus domestica reference genome version 1.0p [28], the canSat3 C. sativa reference genome assembly [29] and the 12X V. vinifera reference genome [30,31]. It is likely that 10-20% of SNPs in the apple data set have the wrong physical coordinates because of the poor quality of the apple reference genome [8], and the cannabis genome sequence employed here remains largely unassembled. LinkImputeR is well-suited for these cases since it does not require ordered genetic markers. Similarly, it is well suited for use in cases where SNPs are called without the use of a reference genome (e.g. [32]). Table 1 summarizes the number of SNPs and samples for each dataset. LinkImputeR As well as performing the inference, imputation and calling steps described above, LinkImputeR also allows the user to examine the effects of various read depth thresholds, d, and additional data quality filters. It will then calculate accuracy for each combination of filters and read depth. The filters implemented in LinkImputeR are minor allele frequency, missingness by both SNP and sample, and deviation from Hardy-Weinberg equilibrium using a simplified version of the method from [33]. Further details on the implementation of each of these filters can be found in Additional file 1. Once accuracy has been calculated for each combination of filters and depth, a summary file is produced reporting the accuracy as well as the number of SNPs and samples for each case. A more detailed output can also be requested.
The user can apply one, or more, of these cases to their dataset. For this study, we first applied a MAF filter of 0.05 using a read depth threshold of 8 and a Hardy-Weinberg equilibrium test with an error rate of 0.01 and a significance level of 0.01, corrected for multiple testing using the Bonferroni correction. LinkImputeR was run on the Glooscap cluster operated by ACENET (http://www.ace-net.ca/). This cluster consists of dual-core, quad-core and 8-core AMD Opterons with 32, 64 or 128 GB of RAM. All machines run Red Hat Enterprise Linux 6.4. The run time to calculate accuracy for all the cases considered is also listed; 10 000 SNPs were masked for the apple and cannabis datasets, 5 000 for the grape dataset. Read depth and missingness thresholds To investigate the effect of read depth and missingness thresholds on imputation accuracy, we tested read depth thresholds between 2 and 8 and missingness thresholds between 0.1 and 0.7 in increments of 0.1. We set sample and SNP missingness to be the same for each case and filtered for SNP missingness before filtering for sample missingness. A genotype is considered non-missing, for the purpose of the missingness filters, if it has more reads than the read depth, d. For genotypes with a read depth > d, we do not calculate an imputed genotype but rather assign them the inferred genotype. Due to the small size of the resulting dataset, it was not possible to test a missingness value of 0.1 on the grape dataset. For the remainder of this paper we will refer to a single case using the format read depth threshold/missingness threshold. For example, 8/0.2 refers to the case where the read depth threshold is set to 8 and both SNP and sample missingness are set to 0.2. Genome-wide association study (GWAS) We aimed to ensure that using low read counts and high levels of missingness would not result in spurious results when performing genetic mapping. To investigate this, we performed a GWAS on apple skin color for four extreme cases (2/0.2, 2/0.7, 8/0.2 and 8/0.7). We used publicly available phenotype data for skin color intensity in Malus domestica to perform the GWAS. Phenotype data were downloaded from the USDA Germplasm Resources Information Network (GRIN) website [34]. Skin color was measured as the percentage of overcolor (generally red) on a fruit. We retained a single average value for clonally related accessions and combined measurements across years as in [24]. Genome-wide association was performed using EMMAX [35]. The k-matrix was generated in EMMAX using the default command given in the documentation. We corrected for relatedness using the k-matrix without any additional covariates. Read depth and missingness thresholds We first calculated accuracy for each of the different cases, i.e. combinations of read depth and missingness thresholds, for all three datasets. Displaying every possible case graphically resulted in plots that were too cumbersome to interpret. Thus, for each dataset, we include only "good cases", where there is no other case with at least the same number of SNPs and samples and a higher accuracy; a sketch of this filter follows below. Figure 1 summarizes the good cases for the apple dataset. Cases with a combination of high read depth threshold and low missingness threshold generally give the highest accuracy, but also result in the lowest number of SNPs and samples. Additional file 3 shows the equivalent figure for the cannabis dataset.
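The "good case" definition above is a Pareto-dominance filter over case size and accuracy; a sketch with illustrative field names:

```python
def good_cases(cases):
    """Keep the cases for which no other case has at least as many SNPs
    and samples and a strictly higher accuracy.

    cases: list of dicts with keys 'snps', 'samples' and 'accuracy'."""
    def dominated(c):
        return any(o is not c
                   and o["snps"] >= c["snps"]
                   and o["samples"] >= c["samples"]
                   and o["accuracy"] > c["accuracy"]
                   for o in cases)
    return [c for c in cases if not dominated(c)]
```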
The same trade-off occurs in both cannabis and apple: as the read depth and missingness thresholds are relaxed, accuracy decreases while the number of SNPs and samples retained increases. In this instance, of the twenty good cases, four have a missingness threshold of 0.7 and two have a threshold of 0.2. The equivalent figure for the grape dataset is visible in Additional file 4. As in apple and cannabis, when the read depth threshold decreases, the number of SNPs and samples increases and accuracy decreases. All seven good cases have a missingness threshold of 0.7. For the remainder of this paper, we focus on missingness levels of 0.2 and 0.7 and compare results between these two extreme missingness levels. We chose a high missingness level of 0.7 since it frequently occurred in the groups of good cases and because it is unlikely that users will want to include SNPs or samples with >70% missing data when calling and imputing SNPs. We chose a missingness level of 0.2 for comparison because it commonly occurs in the group of best cases in the apple dataset, and it is a frequently chosen threshold in other studies (e.g. [8,36]). We did not include 0.1 because the results in apple and grape made the resulting figures difficult to interpret. Full results for all cases are in Additional files 5, 6 and 7. Final dataset size We find that the filters chosen have significant effects on the resulting number of SNPs and samples retained for downstream analyses. In both the cannabis and apple datasets, the case with the largest number of SNPs has approximately 12 times the number of SNPs of the case with the smallest number of SNPs. For the grape dataset, there is a 162-fold difference in the number of SNPs between the most stringent and lenient genomic filters examined. The number of samples remaining after applying the filters presents a more complicated pattern than the number of SNPs, likely due to the use of the SNP missingness filter prior to applying sample thresholds. The number of samples retained at a missingness-by-sample threshold of 0.7 was only 1.13, 1.23 and 1.20 times higher than at the missingness threshold of 0.2 for apple, cannabis and grape, respectively. Accuracy The genotype calling accuracy behaved similarly across missingness thresholds in both the cannabis and apple datasets. In both cases, a missingness threshold of 0.2 results in a higher accuracy than a threshold of 0.7. This result is reversed in grape, where a threshold of 0.7 has the highest accuracy. For all three datasets, no consistent result is seen for the read depth threshold. The result from the grape dataset is consistent with that previously reported for soybean [9], where allowing SNPs and samples with higher levels of missingness did not result in a decrease in genotype calling accuracy. As the result from the grape dataset is not in line with the results from the apple and cannabis datasets, we investigated how the grape dataset may differ from the other two datasets in a way that could affect calling accuracy. Additional file 8 shows the average LD of the SNP of interest with each of the twenty SNPs in highest LD with it, which is a crucial value likely to affect the calling accuracy. Indeed, the profile for the grape dataset differs rather dramatically from the profile of the other two datasets.
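This profile can be computed directly from a genotype matrix using the simple r² correlation mentioned earlier; the sketch below assumes complete 0/1/2 calls and non-constant SNPs, and the names are our own:

```python
import numpy as np

def mean_top_ld(genotypes, snp, k=20):
    """Average r^2 between one SNP and the k SNPs in highest LD with it.

    genotypes: (n_snps, n_samples) array of 0/1/2 genotype calls;
    snp: row index of the SNP of interest."""
    r = np.corrcoef(genotypes)[snp]   # correlations of this SNP with all SNPs
    r2 = np.delete(r ** 2, snp)       # drop the SNP's correlation with itself
    return float(np.sort(r2)[-k:].mean())
```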
Figure 2 summarizes the accuracy obtained by simply inferring genotypes (regardless of read depth), by imputing genotypes with fewer reads than the threshold, and by calling genotypes by combining the inferred and imputed probabilities. It is worth noting that, due to the way the inferred and imputed results are combined, it is unlikely, within the bounds of sampling error, that the called accuracy is less than either the inferred or imputed accuracies. This is because it is possible for the called genotype to be based entirely on the inferred (w = 1) or imputed (w = 0) genotype if this is the optimal solution. Again, results using correlation show a similar pattern (Additional file 9). For the apple and cannabis datasets the called genotypes show a noticeable increase in accuracy over either the imputed or inferred genotypes. This increase is more noticeable at higher read depths, with increases of several percent at a read depth of 8 (apple - 8/0.2 = 2.9%, apple - 8/0.7 = 2.5%, cannabis - 8/0.2 = 3.4%, cannabis - 8/0.7 = 3.6%). Read count effect Results for grape are different from those for the other datasets, with imputed genotypes having nearly identical accuracy to the called genotypes. The likely cause of this difference is the different LD profile in grape discussed previously (Additional file 8). Finding SNPs in high LD is a key element of LD-kNNi, so it is not surprising that different LD profiles would have a significant effect on imputation accuracy. For the other levels of missingness, results are similar across all three datasets (Additional files 10, 11 and 12). Figure 3 shows the results of a GWAS for apple skin color on chromosome 9 across four different combinations of missingness and depth thresholds. As the number of total SNPs included in the analysis increases, the number of "hits" (i.e. SNPs with a significant association with the phenotype) also increases. These additional SNPs are all close to the known locus for apple skin color around position 32.8 MB on chromosome 9 [8,37]. Figure 3 suggests that use of a greater number of SNPs, and thus an increase in the use of imputation, does not result in spurious associations for apple skin color. However, the GWAS results across all chromosomes (Additional file 13) show a possible spurious hit on chromosome 3 for skin color, where no locus for skin color is known to exist. Further investigation of this hit revealed that it likely resulted from a misassembled reference genome sequence: the SNPs involved are in high LD with the SNPs on chromosome 9 that are close to the known locus and in low LD with nearby SNPs on chromosome 3 (Additional file 14). Past studies have found that between 10-20% of SNPs are incorrectly anchored to the apple reference genome used in the present study [8,38]. Table 1 shows the time required to compute the accuracy across all read depth and missingness thresholds for all three datasets. The observed values varied between approximately 6.8 h for apple and 13.5 h for grape. Figure 4 shows the time required to call the complete dataset for each case. Run time varies between 2 min (cannabis - 8/0.2) and 17 h (grape - 2/0.7). Run time is under an hour and a half for every apple and cannabis case examined. The relatively slow run time of grape is likely due to the relatively large number of imputed SNPs. LinkImputeR performance The core imputation algorithm of LinkImputeR has a run time that scales with the square of both the number of SNPs and the number of samples.
However, due to the other parameters involved, for example the effect of the filters on the dataset or the number of neighbours used in the imputation algorithm, run time is likely to be variable even between datasets with similar numbers of SNPs and samples. Direct comparison with other imputation methods is difficult as LinkImputeR performs steps that are normally carried out before imputation. In the cases reported here, it filters for missingness, and infers and imputes genotypes. However, run times compare favorably to those reported for LinkImpute and Beagle [15]. Discussion To call genotypes from the read counts generated by NGS, a read depth threshold is needed below which we cannot confidently call a genotype. Most studies use a threshold on the number of reads, although there is no consensus on what the threshold should be. For example, previous work on apple required a minimum read depth of 6 [8], cannabis used a depth of 10 [26], while work on alfalfa used a threshold of 30 [39]. NGS also produces data with a large amount of missing data. It is standard to remove samples or SNPs with a large amount of missing data; however, there is no consensus on what missingness thresholds should be used. For example, previous work on cannabis and apple filtered for SNPs with greater than 20% missingness by SNP [8,26], while work on sorghum filtered SNPs with more than 40% missing data [40]. Some efforts have been made to reduce the amount of missing data from GBS using specific combinations of restriction enzymes [41], but even highly optimized assays will produce significant amounts of missing data in the resulting genome-wide genotype data. A previous study by Torkamaneh and Belzile [9] investigated the effect of missing data thresholds on imputation. However, this work was performed on a single species and exploited a reference panel of genotypes for the purposes of imputation. Reference panels are not available for most species, including those studied here. LinkImputeR also offers the advantage of not requiring a high quality reference genome, making it suitable for non-model organisms. The desired quality and size of a genome-wide genotype data set will differ according to the type of analysis to be performed, the genetics of the organism under study and the preferences of the researcher. For some downstream analyses, a large number of low quality markers may be preferred, whereas a smaller number of high-quality markers may be more important in other cases. Currently, there is no rapid and simple way to study the effect of different thresholds on dataset size and imputation accuracy without repeating the entire filtering and imputation pipeline. With large datasets, this process would be prohibitively time consuming. Using LinkImputeR, we compare three datasets and find that it is difficult to generalize across organisms what filters should be used before imputation. For both the apple and cannabis datasets, imputation was most accurate after a low missingness threshold filter was applied, but the reverse was true for grape (Figs. 1 and 2, Additional files 3 and 4). The contrasting behavior between datasets is likely due to the different LD profiles of the organisms studied here (Additional file 8). An additional complication when deciding on the desired size and quality of the resulting genotype data is that different downstream analyses may have different requirements.
LinkImputeR allows the effects of different thresholds on the quality and size of a genotype table to be calculated quickly (Table 1) and then allows the user to select whatever thresholds they find most suitable for their purposes (Fig. 2). After selecting thresholds, the process of imputation in LinkImputeR proceeds at a speed that is comparable to existing algorithms. Moreover, the results of performing a GWAS (Fig. 3, Additional files 13 and 14) suggest that, even on datasets with high levels of missingness, imputation is not introducing spurious genotype-phenotype associations. In fact, we anticipate that in many applications, imputing large numbers of genotypes will enable more precise localization of causal loci by enabling an increase in mapping resolution. Incorporating read depth information often improves the performance of LinkImputeR (Fig. 4, Additional files 10, 11 and 12). The size of the improvement depends crucially on the read depth threshold implemented: the effect is most noticeable at high read depth thresholds. The reason for this observation lies in the difference between the information about the true genotype contained in the reads used to infer the genotype versus the information from other samples used to impute the genotype. For example, for genotypes with a read count above the read depth threshold, we simply used the inferred genotype. Only genotypes with a number of supporting reads falling below the read depth threshold were called using a weighted combination of the inferred and imputed probabilities. Since genotypes with a small number of supporting reads provide only a small amount of information about the true genotype, we observe no significant increase in accuracy when the read depth threshold is low. The increase in accuracy afforded by LinkImputeR is therefore more significant when the read depth threshold is higher. LinkImputeR allows optimization based on correlation rather than on accuracy. A similar pattern of results is found using both methods (Figs. 1 and 2, Additional files 2 and 9). While LinkImputeR provides users with the ability to investigate the effects of various thresholds on the accuracy and size of their genotype data, it does not implement a fully probabilistic algorithm in its current form. Also, LinkImputeR can currently be applied only to bi-allelic markers. These two limitations warrant further investigation, since overcoming them promises to improve even further the number and quality of genotypes that can be generated from NGS technologies. Conclusions All existing genotyping methods produce missing genotype data, and filling in these missing genotypes via imputation is a crucial step in nearly all genomic studies. Most existing genomic studies use arbitrary quality and read depth thresholds without investigating how these filters affect the quality and size of the resulting genotype data. We have shown that the effect of these filters can be significant and can vary considerably between sets of samples with varying degrees of genetic diversity, LD and population structure. Using LinkImputeR, researchers can now investigate a range of quality thresholds prior to imputation and determine what set of parameters best suits their research needs. In addition, LinkImputeR exploits read count information that is usually ignored, which increases the accuracy of the resulting genotype data. Thus, LinkImputeR is a valuable tool for generating large, high-quality genome-wide genotype data, especially from non-model organisms.
Analysis on structured stability of highly nonlinear pantograph stochastic differential equations ABSTRACT This paper investigates the structured stability and boundedness of highly nonlinear hybrid pantograph stochastic differential equations (PSDEs). The main contribution of this paper is to take the different structures into account to establish structured robust stability and boundedness results for highly nonlinear hybrid PSDEs. The theory established in this paper is applicable to hybrid PSDEs which may experience abrupt changes in both structures and parameters. Introduction Stochastic delay differential equations (SDDEs) are widely used to model systems which depend not only on present states but also on past states. Robust stability and boundedness are two of the most popular topics in the area of systems and control, but most papers can only be applied to delay systems whose coefficients are either linear or nonlinear but bounded by linear functions (see, e.g. Deng, Fei, Liang, & Mao, 2019; Wu, Tang, & Zhang, 2016). However, the linear growth condition is usually violated in many practical applications. Recently, there has been some progress on stability for highly nonlinear stochastic delay systems (see, e.g. Deng, Fei, Liu, & Mao, 2019; Fei, Hu, Mao, & Shen, 2019; Hu, Mao, & Shen, 2013; Liu & Deng, 2017). Particularly, Hu, Mao, and Zhang (2013) were the first to investigate the robust stability and boundedness of SDDEs with Markovian switching without the linear growth condition. Fei, Hu, Mao, and Shen (2017) established stability criteria for delay-dependent highly nonlinear hybrid stochastic systems. Pantograph stochastic differential equations (PSDEs) are unbounded-delay stochastic differential equations which have been frequently applied in many practical areas, including biology, mechanics, engineering and finance. Baker and Buckwar (2000) established the existence and uniqueness of the solution for the linear stochastic pantograph equation. Shen, Fei, Mao, and Deng (2018) discussed the exponential stability of highly nonlinear neutral PSDEs by Lyapunov functionals and M-matrices. Liu and Deng (2018) investigated pth moment exponential stability of highly nonlinear neutral PSDEs driven by Lévy noise. As we know, hybrid systems driven by continuous-time Markov chains are often used to model systems that may experience abrupt changes in their structures and parameters caused by phenomena such as component failures or repairs (see, e.g. Mao & Yuan, 2006; Shen, Fei, Mao, & Liang, 2018; Zhou & Hu, 2016). The theory in Hu, Mao, and Zhang (2013) is good at dealing with hybrid SDDEs that may experience abrupt changes in their parameters. You, Mao, Mao, and Hu (2015) showed that a given stable hybrid PSDE can tolerate not only linear but also nonlinear perturbations without loss of stability, while most papers could only cope with linear perturbations. However, most references on hybrid systems have dealt with subsystems with the same structures. In fact, stochastic systems may experience changes not only in their coefficients but also in their structures. Fei, Hu, Mao, and Shen (2018) first took the different structures in different modes into account to develop a new theory on the structured robust stability and boundedness of highly nonlinear hybrid SDDEs. But the theory of Fei et al. (2018) cannot be applied directly to highly nonlinear hybrid PSDEs which experience abrupt changes in their structures.
Motivated by the above discussion, this paper will study the exponential stability of a class of PSDEs which experience abrupt changes in their structures. Notation and assumptions Throughout this paper, unless otherwise specified, we use the following notation. We denote by |x| the Euclidean norm for x ∈ R^n. If A is a vector or matrix, its transpose is denoted by A^T. If A is a matrix, its trace norm is denoted by |A| = √(trace(A^T A)). If both a and b are real numbers, then a ∨ b = max{a, b} and a ∧ b = min{a, b}. If G is a set, its indicator function is denoted by I_G; that is, I_G(x) = 1 if x ∈ G and 0 otherwise. Let (Ω, F, {F_t}_{t≥0}, P) be a complete probability space with a filtration {F_t}_{t≥0} satisfying the usual conditions (i.e. it is increasing and right continuous while F_0 contains all P-null sets). Let B(·) be an m-dimensional Brownian motion and r(·) a right-continuous Markov chain on a finite state space S = {1, 2, ..., N}. We assume that the Markov chain r(·) is independent of the Brownian motion B(·). Consider the n-dimensional hybrid PSDE

dx(t) = f(x(t), x(θt), t, r(t)) dt + g(x(t), x(θt), t, r(t)) dB(t)   (1)

on t ≥ 0, where the coefficients f : R^n × R^n × R_+ × S → R^n and g : R^n × R^n × R_+ × S → R^{n×m} are Borel measurable and 0 < θ < 1, with initial data x(0) = x_0 ∈ R^n. Moreover, assume that f(0, 0, t, i) = 0 and g(0, 0, t, i) = 0 for all (t, i) ∈ R_+ × S. For the convenience of the reader, let us cite some useful results on M-matrices. For a vector or matrix A, by A > 0 we mean that all elements of A are positive. A Z-matrix is a square matrix A = (a_ij)_{N×N} which has non-positive off-diagonal entries. Lemma 2.1: Let A = (a_ij)_{N×N} be a Z-matrix. Then A is a nonsingular M-matrix if and only if one of the following statements holds: (1) A^{-1} exists and its elements are all nonnegative. (2) There exists x > 0 in R^N such that Ax > 0. The well-known conditions imposed for the existence and uniqueness of a global solution are the local Lipschitz condition and the linear growth condition (see, e.g. Mao, 2007). To be precise, let us state the local Lipschitz condition. Assumption 2.2: For each integer h ≥ 1 there is a positive constant K_h such that |f(x, y, t, i) − f(x̄, ȳ, t, i)| ∨ |g(x, y, t, i) − g(x̄, ȳ, t, i)| ≤ K_h(|x − x̄| + |y − ȳ|) for those x, y, x̄, ȳ ∈ R^n with |x| ∨ |y| ∨ |x̄| ∨ |ȳ| ≤ h and all (t, i) ∈ R_+ × S. However, we do not state the linear growth condition, as our aim here is to study the structured robust stability and boundedness of highly nonlinear PSDEs which do not satisfy this condition. Boundedness and stability Define λ_i by (4) and ζ_i by (5), where ρ is a free positive parameter. By the definitions of λ_i and ζ_i, we see that all λ_i and ζ_i are positive. Assumption 3.1: Choose ρ > 0 sufficiently small such that condition (6) holds, where λ_i and ζ_i have been defined by (4) and (5). Assume also that conditions (7), (8) and (9) hold. Remark 3.2: Let b̂ be the maximum of the row sums of the matrix concerned; then condition (6) holds for all i ∈ S. Lemma 3.3: Let Assumptions 2.2, 2.3 and 3.1 hold. Then inequality (11) holds for all i ∈ S. Proof: By the definition of V(x, i), the generalized Itô formula, the inequality |x^T g(x, y, t, i)|² ≤ |x|²|g(x, y, t, i)|², Assumption 2.3 and the definitions (4) and (5), we obtain, via the Young inequality and (13), that (14) holds for i ∈ S_1. Similarly, for i ∈ S_2, we can show the analogous bound. But, by condition (6), we have (16); hence, by condition (7) and the Young inequality, we then obtain the corresponding estimate (17) for i ∈ S_2. Combining (7), (14) and (17), we see that (18) holds for all i ∈ S. By conditions (8) and (9), we have ρ̄ < ρq/((q − 2)θ + q − p + 2). By the definitions of ρ_1 and ρ_2, we have ρ_1 > 0 and ρ_2 > 0. We then obtain the assertion from (18) for all i ∈ S. Thus the proof is complete. Theorem 3.4: Let the conditions of Lemma 3.3 hold. Then, for any initial data x_0: (i) the PSDE (1) has a unique global solution x(t); moreover, the solution satisfies the moment bounds (19) and (20), where H_1 and H_2 are positive constants. Proof: Since the coefficients of the hybrid PSDE (1) are locally Lipschitz continuous, for any given initial data there is a unique maximal local solution x(t) on t ∈ [0, σ_∞), where σ_∞ is the explosion time.
Let k_0 > 0 be a sufficiently large integer such that |x_0| < k_0. For each integer k ≥ k_0, define the stopping time τ_k = inf{t ≥ 0 : |x(t)| ≥ k}, where throughout this paper we set inf ∅ = ∞. Clearly, τ_k is increasing as k → ∞ and τ_∞ = lim_{k→∞} τ_k ≤ σ_∞ a.s. If we can show that τ_∞ = ∞, then σ_∞ = ∞ a.s. and assertion (i) follows. We now show assertion (19). The required moment bound follows from (24). Letting k → ∞ and then using the Fubini theorem, we obtain the integrated bound; dividing both sides by ρ_1 t and then letting t → ∞ gives the lim sup estimate, which is the desired assertion (19). Choose a positive constant ε sufficiently small that ε < ρ_1 c_2 and ε ≤ 1. By the generalized Itô formula again, we obtain the corresponding identity for any t ≥ 0, and by (12) and (22) the next estimate. Since 0 < θ < 1 and ε ≤ 1, we can get (ε − 1 + θ)/θ ≤ ε, so that the two auxiliary bounds hold; thus, we obtain the estimate involving D, where D : R_+ → R is defined accordingly. By (26), letting k → ∞, we obtain the bound which yields assertion (20). The proof is complete. Theorem 3.5: Let all the conditions in Lemma 3.3 hold and, moreover, β_{i1} = 0 for all i ∈ S. Then the unique global solution x(t) of the PSDE (1) has property (30). Proof: Note that c_3 = 0 in (11), given that β_{i1} = 0 for all i ∈ S; hence (11) simplifies accordingly. It is then easy to show by the generalized Itô formula that the required bound holds, and letting t → ∞ yields assertion (30). Theorem 3.6: Let all the conditions in Lemma 3.3 hold except that condition (7) is replaced by a weaker condition, and, moreover, β_{i1} = 0 for all i ∈ S. Then there is a positive number δ such that for any initial data x(0) = x_0, the unique global solution x(t) of the PSDE (1) satisfies the two stated lim sup bounds. Proof: In the same way that (11) was proved, we can show from (13) and (16) the corresponding inequality, which implies the required estimate. Let δ > 0 be sufficiently small that the relevant inequality holds and 2ρ_1 ≥ δc_2. Applying the generalized Itô formula then completes the argument. Two special cases and an example We will also assume that all coefficients of the PSDEs in this section satisfy the local Lipschitz condition and, moreover, q > p ≥ 2. To make our cases more understandable, we assume that the given hybrid system is described by a hybrid differential equation (37). Assume that this given hybrid differential equation is either asymptotically stable or bounded. Its structured differences and various stochastic perturbations will be discussed in the following two cases. Case 1: Assume that for each i ∈ S_1, there is a number b_{i1} < 0 such that the corresponding drift condition holds, while for each i ∈ S_2, there is a pair of numbers b_{i1} ∈ R and b_{i2} > 0 such that the corresponding condition holds for (x, t) ∈ R^n × R_+. This means that the differential equation in mode i ∈ S_1 is stable but may not be in mode i ∈ S_2. In order for the hybrid Equation (37) to be stable, we assume moreover that the matrix (38) is a nonsingular M-matrix. It is then known (see, e.g. Hu, Mao, & Zhang, 2013) that Equation (37) is exponentially stable in pth moment. Suppose that Equation (37) is subject to a stochastic perturbation and the perturbed system is described by (39), and that the perturbation has its structured difference in the sense that the stated growth bounds hold for (y, t) ∈ R^n × R_+, where b_{i3} > 0 for all i ∈ S. We wish to obtain upper bounds on the b_{i3}'s for the perturbed system (39) to remain stable. Noting the bound for i ∈ S_1, we see that Assumption 2.3 is satisfied with the corresponding parameters. Hence the matrix A defined by (2) is the same as the matrix A defined by (38), and so A is a nonsingular M-matrix. Moreover, the matrix B defined by (3) becomes a nonsingular M-matrix too, by Lemma 2.1. We choose ρ by (10), so condition (6) is satisfied by Remark 3.2. Compute the λ_i's and ζ_i's by (4) and (5), respectively. Conditions (7)-(9) then yield the bounds (41) and (42) for i ∈ S_2.
By Theorems 3.5 and 3.6, we can therefore conclude that if the perturbed parameters b_{i3} satisfy (41) and (42), then the PSDE (39) is both mean square and almost surely exponentially stable. Case 2: In this case we will discuss robust boundedness. Assume that the stated growth conditions hold, and suppose that the perturbed system is described by (45) and that the perturbation coefficients satisfy the stated bounds, where b_{i3} and b_{i5} are all nonnegative numbers. We aim to obtain upper bounds on them so that the perturbed system (45) remains asymptotically bounded. It follows from these conditions that the corresponding estimate holds for i ∈ S_1. As a result, Assumption 2.3 is satisfied with the corresponding parameters, computed via (4) and (5), respectively. Conditions (7)-(9) then yield the bounds (48) and (49). By Theorem 3.4, we can therefore conclude that if the perturbed parameters b_{i3} satisfy (48) and (49), then the solution x(t) of the PSDE (45) has properties (19) and (20). Similarly, for the example, we can conclude that if the perturbed parameters σ_i satisfy (51), then the solution enjoys the corresponding boundedness properties, where H_1 and H_2 are positive constants. To perform a computer simulation of the solution, we set σ_1 = σ_2 = 0.15, σ_3 = σ_4 = 0.07, σ_5 = 0.7, σ_6 = 0.7, x(0) = 1 and r(0) = 1. The computer simulations in Figure 1 show a single sample path of the Markov chain and that of the solution, from which we can see how the Markov chain jumps from one mode to another and how the solution evolves in a bounded domain. Conclusion In this paper, we have discussed robust stability and boundedness for highly nonlinear hybrid PSDEs with different structures. We have also discussed two special cases and an example to illustrate our theory. Disclosure statement No potential conflict of interest was reported by the authors.
A systematic review of the cost-effectiveness of maternity models of care Objectives In this systematic review, we aimed to identify the full extent of cost-effectiveness evidence available for evaluating alternative Maternity Models of Care (MMC) and to summarize findings narratively. Methods Articles that included a decision tree or state-based (Markov) model to explore the cost-effectiveness of an MMC, and at least one comparator MMC, were identified from a systematic literature review. The MEDLINE, Embase, Web of Science, CINAHL and Google Scholar databases were searched for papers published in English, Arabic, and French. A narrative synthesis was conducted to analyse results. Results Three studies were included; all using cost-effectiveness decision tree models with data sourced from a combination of trials, databases, and the literature. Study quality was fair to poor. Each study compared midwife-led or doula-assisted care to obstetrician- or physician-led care. The findings from these studies indicate that midwife and doula led MMCs may provide value. Conclusion The findings of these studies indicate weak evidence that midwife and doula models of care may be a cost-effective or cost-saving alternative to standard care. However, the poor quality of evidence, lack of standardised MMC classifications, and the dearth of research conducted in this area are barriers to conclusive evaluation and highlight the need for more research incorporating appropriate models and population diversity. Supplementary Information The online version contains supplementary material available at 10.1186/s12884-023-06180-6. Background Recognition of women's diverse needs, circumstances and preferences has resulted in large investments internationally to expand the range and accessibility of models of maternity care (MMC) [1][2][3]. MMCs are care pathways pregnant women engage in for their maternity care, guiding the level and type of care provided to a woman during pregnancy, birth and the postpartum period [4]. At least 10 distinct MMCs are available internationally in high income countries, and these can be categorized into five broad groups: Standard Care delivered by a large team of obstetricians and midwives; General Practitioner (GP) or Family Physician Shared Care with support from obstetricians as required; Midwife-led Continuity Care with support from obstetricians as required; Private Obstetric-led Continuity Care; and Private Midwife-led Continuity Care, each organised individually by the pregnant woman [5][6][7]. Features of each group are described in Table 1, outlining who leads the care and the usual location of care. While broad categories exist, and efforts have been made to develop a standardised classification system [4], varied and inconsistent terminologies and definitions around MMCs remain an impediment to adequate evaluation of available MMCs [7]. Funding of MMCs varies internationally, with universal health care or public health insurance funding many MMCs in countries including Australia, the United Kingdom and the Netherlands [8,9]. Private health insurance, supplemented by Affordable Care Act funding and out-of-pocket fees, finances MMCs in the United States [10,11].
A major challenge for health systems is that it is not sustainable to continue to expand access to a wider range of MMCs. Ill-informed expansion would create inefficiencies in both a free and a government-regulated market. Inefficiencies would arise because of an excess supply of an inappropriate mix of services that do not meet the demand for different MMCs. This situation may result in 'too little care, too late' or 'too much, too soon'; and alongside the high costs of establishing maternity care, would be inefficient for the health system [12,13]. However, there is justification for selective expansion of MMCs that are cost-effective. Funding of MMCs that improve health outcomes and/or costs relative to alternatives is likely to free resources for other types of maternity services that are justified on equity and access grounds, creating a fair health system. It is therefore critical that decision makers have a thorough understanding of the cost-effectiveness of MMCs. Existing evidence for the costs alone, or the cost-effectiveness, of MMCs is limited, especially for team and caseload midwife-led continuity MMCs [14,15]. Studies that have examined both costs and health outcomes of various MMCs have generally not used economic modelling methods, which establish a generalizable framework for future research and service evaluations. The available evidence on costs and outcomes of midwife-led continuity models versus other MMCs reported in randomized controlled trials has been well synthesized in a Cochrane review [16]. Four studies included in this Cochrane review examined costs alongside maternal and neonatal clinical outcomes [17][18][19][20] but did not provide a combined measure of costs and health outcomes such as an incremental cost-effectiveness ratio, nor use cost-effectiveness modelling methods. Doing so would have better informed decision makers as to the real-world economic costs of a full clinical pathway associated with different MMCs, rather than the non-generalizable conclusion of the Cochrane review that midwife-led continuity MMCs may be cost-saving [16]. Model-based economic evaluations were also found to be rare by the authors of an Australian review that focused on midwife-led continuity MMCs for women with high-risk pregnancies [21], and uncommon in a systematic review of the cost-effectiveness of midwife-led care in the United Kingdom [13].
Another limitation of the existing literature is that studies evaluating the cost-effectiveness of place of birth are perceived as equivalent to evaluations of MMCs [22][23][24]. While place of birth is often unique to individual MMCs, the available evaluations do not examine the full pregnancy and postpartum continuum, where approaches to care, and therefore adverse events and costs, may differ between MMCs. It is important to evaluate the full range of MMCs available internationally, rather than focusing on one aspect such as place of birth or on one profession. While there is clear evidence that midwife-led continuity care MMCs result in better short-term clinical outcomes for low-risk mothers and neonates, such as fewer caesarean sections and admissions to neonatal intensive care units [20,25,26], decision makers can only design an efficient mix of MMCs when all options are evaluated and produce a useful measure of benefit that combines both costs and outcomes into an incremental cost-effectiveness ratio or equivalent. We suspect there is little useful and relevant evidence for the cost-effectiveness of MMCs that can inform maternity care reform internationally. In the 'place of birth evaluation' conducted by Henderson et al., the authors acknowledge the dearth of model-based economic evaluations examining MMCs more widely [22], which is supported by the more recent Australian review [21]. Aims The aims of this systematic review were to determine the extent of cost-effectiveness evidence available for evaluating alternative MMCs and to narratively summarize findings for their comparative cost-effectiveness. We intended to identify gaps in knowledge that may prohibit cost-effectiveness analysis being used to support maternity services in their allocation of resources to MMCs internationally. Registration and reporting The protocol for this systematic review has been registered with PROSPERO: CRD42021223334. Our reporting is guided by the 2020 PRISMA statement [27]. Search strategy The search strategy was developed collaboratively, with consensus from the review team and input from an experienced librarian. We performed thorough and systematic searches in Embase, MEDLINE, Web of Science, and Google Scholar (from which only the first 200 references were included). These databases were expected to capture 98.3% of relevant studies [28]. For completeness, we also searched the Cumulative Index of Nursing and Allied Health Literature (CINAHL Complete via EBSCOhost) [28]. Additionally, the reference lists of studies included in full-text screening and of relevant systematic reviews were hand-searched. The search strategy included keywords and subject headings related to models of maternity care (e.g., private obstetric care, birth centre care, midwifery group practice) and economic evaluation research (e.g., cost-effectiveness, cost-benefit, cost-utility). Searches were adapted for appropriate use in each database. The search was restricted to papers published in English, Arabic and French. Studies published from 01/01/2000 until 23/11/2020 were sought to ensure both a broad search and more recent costing estimates amongst included studies. Searches were repeated across all databases on 30/12/2022 to include the years 2021 and 2022 before the final analysis, and newly published studies were considered for inclusion. The full search strategy will be available on PROSPERO upon publication of this review.
Eligibility criteria

To be included in this review, papers had to report findings from cost-effectiveness modelling studies that used a decision tree or state-based (Markov) model to explore the cost-effectiveness of an MMC against at least one comparator MMC. We sought only modelling studies because they are well suited to comparing all the relevant alternatives a decision maker is considering, simplifying reality where a 'real-life' randomized controlled trial cannot be conducted, such as one randomly allocating women to different MMCs [29]. Models also have the flexibility to use multiple sources of data, ensuring that the best available data inform decision making. Measuring single clinical outcomes in trial-based economic evaluations is only justifiable where there is good reason to believe that the change will not also have long-term effects on quality of life and decision makers are not interested in other relevant intervention outcomes; neither condition holds for maternity care. Models also have an advantage over trial-based economic evaluations in that they can be adjusted more easily to other settings, such as routine practice and other geographical locations [29].

Studies that focused solely on specific interventions during pregnancy were excluded. Location- or ward-based studies that examined the impact of midwife-led versus obstetrician-led wards or similar were also excluded, as there was consensus between the reviewers that admission to either type of ward does not necessarily indicate the woman's affiliation with either model of care; a woman could have been following a midwife-led model but been assigned to an obstetrician-led ward, and her outcomes would not necessarily reflect obstetrician-led care. Women in the included studies needed to have remained in a single MMC across the continuum of pregnancy, birth and the postpartum period. If other specialists were involved in care but the lead clinician or model-of-care structure continued, the study remained eligible for inclusion. If it was not reported that women moved between MMCs, either in reality or hypothetically in the cost-effectiveness model, the study was included.

Study selection

Retrieved citations were exported to EndNote and duplicates were removed. Two independent reviewers (EM and BA) screened titles and abstracts against the eligibility criteria. Conflicts were resolved through discussion. EM and BA then independently screened all full-text papers and reached consensus on the included studies.

Outcome measures

The primary outcome was an incremental cost-effectiveness ratio, representing the change in costs relative to the change in health outcomes when comparing one MMC to another (written out formally at the end of this section). As a secondary outcome, we were also interested in whether model uncertainty had been quantified.

Data extraction and analysis

Data extraction was conducted using Excel. Extracted information included the MMCs evaluated, the cost-effectiveness modelling approach used with relevant parameters and assumptions, the reported incremental cost-effectiveness ratio, the selected cost-effectiveness threshold, the reported costs and health outcomes for each evaluated MMC, and sensitivity analysis results. These parameters were decided upon following the Consolidated Health Economic Evaluation Reporting Standards (CHEERS) statement [30] and the authors' expertise.
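For readers less familiar with the primary outcome, the incremental cost-effectiveness ratio (ICER) of a new MMC relative to a comparator can be written as follows; the symbols are generic and are not drawn from any included study:

```latex
\mathrm{ICER} \;=\; \frac{\Delta C}{\Delta E} \;=\; \frac{C_{\text{new}} - C_{\text{comparator}}}{E_{\text{new}} - E_{\text{comparator}}}
```

The new MMC is conventionally deemed cost-effective when the ICER falls below a willingness-to-pay threshold (for example, the C$50,000 per NICU admission avoided applied in the Canadian study discussed below), and cost-saving (dominant) when costs fall while health outcomes improve.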
Quality assessment

The quality of included studies was assessed using the Joanna Briggs Institute (JBI) Checklist for Economic Evaluations [31]. Included studies were categorised as 'good', 'fair' or 'poor' using the criteria described in the protocol. Since this was a narrative synthesis review and no papers were to be excluded on the basis of methodological quality, the authors deemed it appropriate to have one reviewer (BA) appraise the studies, with thorough cross-checking by EM and conflicts resolved through discussion.

Narrative data synthesis

A narrative synthesis was conducted rather than a meta-analysis because of the range of MMCs delivered internationally and therefore evaluated, the variation in potential primary outcome measures, and the expected small number of good-quality economic evaluations. In the synthesis, we describe the included studies and report the cost-effectiveness and any sensitivity analysis results.

Results

We identified 3142 potential studies for inclusion, of which 2533 remained after the removal of duplicates. We excluded 2509 studies during title and abstract screening, and the remaining 24 papers underwent full-text screening. From these, three studies were included for data extraction and synthesis (see Fig. 1). All three studies were cost-effectiveness decision tree models, with data sourced from a combination of trials, databases and the literature. Studies excluded following abstract screening and full-text assessment for eligibility, with reasons, are listed in an additional file (see Additional file 1).

The quality of the included studies ranged from poor [32] to fair [33,34], with none rated as good. The poor-quality study did not report costs and health outcomes clearly, nor were costs converted to a single consistent year [32]. The authors also failed to report how potential confounding factors, such as maternal health and socioeconomic factors, were considered during analysis; these may have greatly affected the external validity and subsequent conclusions of the study. The key limitation of the two studies assessed as fair was that some of their assumptions were not appropriate for decision making and policy. For example, one study assumed a single cost for neonatal intensive care, whereas in reality the cost depends on length of stay [33]. The primary outcome of this study, neonatal intensive care avoided, does not represent the quality of life of the parents. Such an outcome also potentially undervalues the baby's life while admitted, as it is binary and does not represent a family's journey through intensive care. The other fair study did not distinguish between emergency and elective caesarean section, which are associated with very different maternal quality-of-life outcomes [34]. Quality assessment ratings for the three studies are available in Table 2.
The three studies were conducted in Canada [33] and the United States [32,34] (Table 2). In the Canadian study [33], the cost-effectiveness of a family physician-led versus a midwife-led MMC was evaluated between 2013 and 2017. Decision tree modelling was conducted from the perspective of the province of Nova Scotia, as the government was evaluating the new midwife-led MMC for low-risk pregnancies. Avoiding Neonatal Intensive Care Unit (NICU) admission was the primary health outcome, while costs were estimated from the standard public health insurance costs associated with maternity care activity logged in hospital administrative databases. The incremental cost-effectiveness ratio of midwife-led care was C$27,502 (US$22,090) per NICU admission avoided. The authors used a cost-effectiveness threshold of C$50,000 (US$40,160) per NICU admission avoided, and midwife-led care was therefore deemed cost-effective, but not cost-saving.

In one of the studies conducted in the United States [32], obstetrician-led care was compared with midwife-led care for low-risk women between 2011 and 2012. Decision tree modelling was conducted from the perspective of health funders, including the public Medicaid program and private health insurers. The primary health outcome was obstetric procedures during birth, such as epidural analgesia, labour induction, caesarean birth and episiotomy. The best available health data were synthesised for the decision tree model: a combination of data from a national cross-sectional dataset of women and a systematic review [16]. Costs were sourced from reports of private and public care costs in the United States. While no incremental cost-effectiveness ratio was reported, the authors concluded that a shift from obstetrician-led care to midwife-led care could be cost-saving in the United States.

The second study conducted in the United States [34] was a decision tree modelling evaluation of standard maternity care compared with standard care plus doula support in the upper Midwest between 2010 and 2014. A doula is a non-medical companion who can provide support before, during and after birth. The aim of this study was to model the potential cost-effectiveness of Medicaid-funded doula services. Standard maternity care was not defined in the included study, but we assumed it was obstetrician-led, with the women cared for by a variety of obstetricians and midwives across the care continuum and delivery in a Medicaid-funded hospital. This model is similar to the standard Medicaid-funded maternity care described elsewhere [11]. While doula services are not internationally regarded as a type of MMC, this study was included as a potentially novel additional MMC, distinct again from midwife-led continuity of care. The primary health outcome was preterm birth (< 37 weeks) averted, and health data for the model were synthesised from multiple sources: a national survey, a maternity care activity database, government preterm birth data, and data from the women and doulas who participated in the doula trial. Costs of care were sourced from Medicaid government reports. An incremental cost-effectiveness ratio was not reported; however, a scatterplot in the included study suggested that standard care with a Medicaid-funded doula would be cost-saving, based on deterministic modelling.
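To make the structure of these evaluations concrete, the following is a minimal sketch of a two-arm decision tree with a probabilistic sensitivity analysis (PSA) of the kind the included studies report. Every probability, cost and distribution parameter below is an illustrative assumption, not a value taken from any of the three studies.

```python
import random

# Minimal two-arm decision-tree sketch with a probabilistic sensitivity
# analysis (PSA). All numbers are illustrative assumptions, not values
# from the included studies.

WTP_THRESHOLD = 50_000   # willingness to pay per adverse event averted
N_DRAWS = 10_000

def arm(p_adverse, cost_uncomplicated, cost_adverse):
    """Expected cost and adverse-event rate for one decision-tree arm."""
    expected_cost = (1 - p_adverse) * cost_uncomplicated + p_adverse * cost_adverse
    return expected_cost, p_adverse

# Deterministic (base-case) run with placeholder inputs.
cost_comp, rate_comp = arm(0.12, 9_000, 30_000)   # comparator (e.g. obstetrician-led)
cost_new, rate_new = arm(0.09, 8_000, 30_000)     # new MMC (e.g. midwife-led)
d_cost, d_effect = cost_new - cost_comp, rate_comp - rate_new
if d_cost <= 0 and d_effect > 0:
    print("Base case: new MMC dominates (cheaper, fewer adverse events).")
else:
    print(f"Base-case ICER: {d_cost / d_effect:,.0f} per adverse event averted")

# PSA: beta distributions for probabilities, gamma for right-skewed costs
# (both common choices), counting draws with positive incremental net
# monetary benefit at the threshold.
random.seed(1)
wins = 0
for _ in range(N_DRAWS):
    c_comp, r_comp = arm(random.betavariate(12, 88),
                         random.gammavariate(90, 100),    # mean ~9,000
                         random.gammavariate(100, 300))   # mean ~30,000
    c_new, r_new = arm(random.betavariate(9, 91),
                       random.gammavariate(80, 100),      # mean ~8,000
                       random.gammavariate(100, 300))
    inmb = WTP_THRESHOLD * (r_comp - r_new) - (c_new - c_comp)
    if inmb > 0:
        wins += 1
print(f"Probability new MMC is cost-effective: {wins / N_DRAWS:.1%}")
```

The proportion printed at the end corresponds to the '83% probability of being cost-effective' style of result reported for the Canadian study in the following subsection.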
All three included studies used probabilistic sensitivity analysis to quantify uncertainty in the models. The midwife-led MMC in Canada had an 83% probability of being cost-effective. In this study, scenarios of increasing costs of care, and sub-group analyses across urban and rural areas of Canada, were also assessed; the cost-effectiveness results remained below the set threshold in these scenarios [33]. In the sensitivity analysis for the United States study comparing midwife-led and obstetrician-led care [32], 95% prediction intervals were reported for both the cost and obstetric-procedure outcomes used in the decision tree model. Costs had narrow prediction intervals: C$28,457-C$30,936 for obstetrician-led care and C$25,426-C$29,108 for midwife-led care. Wider prediction intervals were reported for the health outcomes of preterm birth, planned caesarean section, epidural and episiotomy [32]. In the scenario analysis, the authors tested two increases in the volume of births cared for in a midwife-led MMC: a 10 percentage-point increase, and an increase from 8.9% to 40%. Cost savings increased in each scenario: an increase in the proportion of midwife-led care from 8.9% to 40% would yield annual cost savings of US$539 million for public funders, and a similar shift toward midwife-led care would save the private health sector US$1.35 billion. Adding doula support to standard care in the second United States study had a 73.3% probability of being cost-saving, as reported in its sensitivity analysis [34].

Discussion

Through this systematic review, we identified three studies that examined the cost-effectiveness of different MMCs in low-risk pregnancies using decision tree modelling. Each study compared midwife-led or doula-assisted care with obstetrician- or physician-led care. All concluded that midwife-led and doula-assisted models of care would be cost-effective or cost-saving. Costs were estimated from public reports of healthcare costs and the existing literature, and often generalised disparate treatment types into single cost estimates. In all studies, low-risk pregnancies were treated as a homogeneous group, which may obscure true cost savings in particular population segments. Overall, the quality of the included studies was poor to fair, which limits the interpretation of their results for other settings. All studies had significant methodological limitations. One study [32] did not record any medical or demographic data of participants; together with the self-selection bias inherent in observational research on this type of care, potential confounders may greatly influence health and cost outcomes. The other two studies attempted to control for confounders, but their findings show that the midwife-led cohorts did have lower rates of risk factors for poor birth outcomes, such as obesity, smoking, hypertension and diabetes [33,34].

The findings of these studies provide weak evidence that midwife-led and doula models of care may be a cost-effective or cost-saving alternative to standard care. However, the low quality of the evidence, the lack of health and demographic data, self-selection bias, and inappropriate cost measurement procedures and assumptions mean that further research is needed to determine the true economic impacts of different models of care and to identify the patient groups for which these models may be most suitable.
Modelling studies for evaluating the cost-effectiveness of alternative MMCs were rare. This may be due to the difficulties associated with valuing temporary health states such as pregnancy [35]. The values of health states for long-term chronic conditions are well studied [36-42]. However, temporary health states, in which disutility is experienced for a year or less with a usual return to normal health (although we acknowledge that some women never return to normal health postpartum), are not well researched [35]. It has been argued [43] that estimating the value people place on such states using time trade-off or standard gamble methods is not appropriate, because these methods compare the health state in question with death, which may be too extreme a comparison. Population values for health states represented by EQ-5D-5L responses, derived using conventional time trade-off methods [43], are therefore likely to misvalue temporary health states such as those experienced during pregnancy. Adapted methods for valuing temporary health states have been proposed [35], and we recommend further study in this area for economic evaluations of maternity services.

There is also a general paucity of long-term data for maternity outcomes, such as breastfeeding outcomes and infant atopy. The scarcity of modelling studies for the economic evaluation of maternity services may also be explained by the lack of such data and by the challenges of including infant health outcomes in models [44]. A model structure usually represents the clinical pathway of a single type of patient, and no guideline has been established for incorporating the health outcomes of both women and infants in a single model. As the mother-baby dyad is such a critical aspect of maternity care, long-term data that can populate a model examining both women's and infants' health outcomes and costs following engagement with maternity services are important to pursue.

Strengths and limitations

The systematic review had a number of strengths, including adherence to well-regarded reporting and quality assessment tools [30,31]. We also created a thorough and focussed search strategy, with a well-defined methodology, for the purpose of identifying whether a research gap exists in the cost-effectiveness evidence available to inform decisions to expand MMCs. The authors further increased the validity of the findings by widening the breadth of the search to include Arabic and French manuscripts.
The main limitation of, or barrier to, this review is the lack of standardised classifications for models of maternity care. In 2022, 890 different MMCs were reported in Australia alone [45]. While standardised classification systems have been developed, heterogeneity within models undermines many of these attempts at standardisation [46]. Other limitations of the review result from the lack of diversity in the MMCs evaluated in the included studies. While there are a large number of maternity care pathways to choose from, the included studies covered only midwife-led care and doula-supported care, compared with obstetrician- or family physician-led care. The studies also lacked geographic diversity, with two conducted in the United States of America and one in Canada. The findings can therefore only be generalised to these two nations, and even between them generalisability is questionable. Data relating to the types and costs of MMCs in countries whose cultures and health system structures differ from North America's, and on the economic consequences of prioritising specific models, have yet to be identified or evaluated.

Conclusions

In this systematic review we identified three studies that used decision tree modelling to determine the cost-effectiveness of alternative MMCs. Few studies use appropriate and rigorous cost-effectiveness modelling to strengthen the evidence evaluating MMCs. Findings from the limited number of studies available for this review consistently indicate that midwife-led and doula-supported MMCs provide value. More confident conclusions were prevented by low study quality and the limited generalizability of results. Further research could better reflect the complexity and diversity of the MMC service-delivery landscape and the range of outcomes across the mother-baby dyad. The lack of standardised nomenclature for MMCs is an additional impediment to building evidence in this area. Future cost-effectiveness studies evaluating MMCs should explore new methods that measure mother-baby dyad outcomes and account for the diversity of MMCs internationally.

Table 1. International maternity model of care categories and features. (a: often called team, caseload/group.)

Table 2. Summary of included studies' features and results.